Graeme Forbes (2011) raises some problems for two-dimensional semantic theories. The problems concern nested environments: linguistic environments where sentences are nested under both modal and epistemic operators. Closely related problems involving nested environments have been raised by Scott Soames (2005) and Josh Dever (2007). Soames goes so far as to say that nested environments pose the “chief technical problem” for strong two-dimensionalism. We call the problem of handling nested environments within two-dimensional semantics “the nesting problem”. We show that the two-dimensional semantics for attitude ascriptions developed in Chalmers (2011a) has no trouble accommodating certain forms of the nesting problem that involve factive verbs such as “know” or “establish”. A certain form of the nesting problem involving apriority and necessity operators does raise an interesting puzzle, but we show how a generalized version of the nesting problem arises independently of two-dimensional semantics—it arises, in fact, for anyone who accepts the contingent a priori. We then provide a two-dimensional treatment of the apriority operator that fits the two-dimensional treatment of attitude verbs and apply it to the generalized nesting problem. We conclude that two-dimensionalism is not seriously threatened by cases involving the nesting of epistemic and modal operators.
Many expressions intuitively have different epistemic and modal profiles. For example, co-referring proper names are substitutable salva veritate in modal contexts but not in belief contexts. Two-dimensional semantics, according to which terms have both a so-called primary and a secondary intension, is a framework that promises to accommodate and explain these diverging intuitions. The framework can be applied to indexicals, proper names, or predicates. Graeme Forbes argues that the two-dimensional semantics of David Chalmers fails to account for so-called nested contexts. These are linguistic contexts where a sentence is embedded under both epistemic and modal operators. Chalmers and Rabern suggest a two-dimensional solution to the problem. Their semantics solves the nesting problem, but at the cost of invalidating certain plausible principles. We suggest a solution that is both simpler and avoids this cost.
According to the modal account of propositional apriority, a proposition is a priori if it is possible to know it with a priori justification. Assuming that modal truths are necessarily true and that there are contingent a priori truths, this account has the undesirable consequence that a proposition can be a priori in a world in which it is false. Epistemic two-dimensionalism faces the same problem, since on its standard interpretation, it also entails that a priori propositions are necessarily a priori. In response to this problem, Chalmers and Rabern propose an alternative conception of propositional apriority as well as two-dimensional truth-conditions for apriority statements. Their proposal is also supposed to avoid another problem for the modal account, namely that it entails the existence of false instances of ‘φ iff actually φ’. I discuss Chalmers and Rabern’s account and point out a number of problems with it. I then develop my own account of propositional apriority that solves the problems in question, that can be accepted by friends and foes of two-dimensionalism alike, and that is also neutral with respect to the question of how one construes the objects of propositional apriority.
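The undesirable consequence can be displayed with a short modal argument. A minimal sketch in my own S5 rendering, with $K_{ap}\varphi$ read as ‘φ is known with a priori justification’ (an illustration of the reasoning, not the paper’s own formalism):

\[
\begin{aligned}
&\mathrm{Ap}(\varphi) \;:=\; \Diamond K_{ap}\varphi && \text{(modal account of apriority)}\\
&\Diamond K_{ap}\varphi \rightarrow \Box \Diamond K_{ap}\varphi && \text{(modal truths are necessary; S5 axiom 5)}\\
&\text{hence}\quad \mathrm{Ap}(\varphi) \rightarrow \Box\,\mathrm{Ap}(\varphi).
\end{aligned}
\]

If φ is contingent a priori, then $\mathrm{Ap}(\varphi)$ and $\Diamond\neg\varphi$ both hold, so some world verifies $\mathrm{Ap}(\varphi) \wedge \neg\varphi$: φ is a priori at a world at which it is false.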
Epistemic two-dimensional semantics is a theory in the philosophy of language that provides an account of meaning which is sensitive to the distinction between necessity and apriority. While this theory is usually presented in an informal manner, I take some steps in formalizing it in this paper. To do so, I define a semantics for a propositional modal logic with operators for the modalities of necessity, actuality, and apriority that captures the relevant ideas of epistemic two-dimensional semantics. I also describe some properties of the logic that are interesting from a philosophical perspective, and apply it to the so-called nesting problem.
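For orientation, the core two-dimensional clauses such a formalization builds on can be sketched as follows (a reconstruction from the standard literature on epistemic two-dimensionalism, not the paper’s official definitions). Formulas are evaluated at pairs $\langle v, w\rangle$, where $v$ is the world considered as actual and $w$ the world of evaluation:

\[
\begin{aligned}
\langle v,w\rangle &\models \Box\varphi &&\text{iff}\quad \langle v,w'\rangle \models \varphi \text{ for every } w' &&\text{(necessity)}\\
\langle v,w\rangle &\models @\varphi &&\text{iff}\quad \langle v,v\rangle \models \varphi &&\text{(actuality)}\\
\langle v,w\rangle &\models A\varphi &&\text{iff}\quad \langle v',v'\rangle \models \varphi \text{ for every } v' &&\text{(apriority)}
\end{aligned}
\]

Necessity quantifies along the row of worlds of evaluation, actuality returns to the diagonal point, and apriority quantifies over the whole diagonal; the divergence between $\Box$ and $A$ is what allows contingent a priori truths, and their interaction under nesting is exactly what the nesting problem probes.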
This paper is concerned with a propositional modal logic with operators for necessity, actuality and apriority. The logic is characterized by a class of relational structures defined according to ideas of epistemic two-dimensional semantics, and can therefore be seen as formalizing the relations between necessity, actuality and apriority according to epistemic two-dimensional semantics. We can ask whether this logic is correct, in the sense that its theorems are all and only the informally valid formulas. This paper gives outlines of two arguments that jointly show that this is the case. The first is intended to show that the logic is informally sound, in the sense that all of its theorems are informally valid. The second is intended to show that it is informally complete, in the sense that all informal validities are among its theorems. In order to give these arguments, a number of independently interesting results concerning the logic are proven. In particular, the soundness and completeness of two proof systems with respect to the semantics are proven (Theorems 2.11 and 2.15), as well as a normal form theorem (Theorem 3.2), an elimination theorem for the actuality operator (Corollary 3.6), and the decidability of the logic (Corollary 3.7). It turns out that the logic invalidates a plausible principle concerning the interaction of apriority and necessity; consequently, a variant semantics is briefly explored on which this principle is valid. The paper concludes by assessing the implications of these results for epistemic two-dimensional semantics.
The “demarcation problem,” the issue of how to separate science from pseudoscience, has been around since fall 1919—at least according to Karl Popper’s (1957) recollection of when he first started thinking about it. In Popper’s mind, the demarcation problem was intimately linked with one of the most vexing issues in philosophy of science, David Hume’s problem of induction (Vickers 2010) and, in particular, Hume’s contention that induction cannot be logically justified by appealing to the fact that “it works,” as that in itself is an inductive argument, thereby potentially plunging the philosopher straight into the abyss of a viciously circular argument.
Ever since Socrates, philosophers have been in the business of asking questions of the type “What is X?” The point has not always been to actually find out what X is, but rather to explore how we think about X, to bring up to the surface wrong ways of thinking about it, and hopefully in the process to achieve an increasingly better understanding of the matter at hand. In the early part of the twentieth century one of the most ambitious philosophers of science, Karl Popper, asked that very question in the specific case in which X = science. Popper termed this the “demarcation problem,” the quest for what distinguishes science from nonscience and pseudoscience (and, presumably, also the latter two from each other).
One of the reasons why most of us feel puzzled about the problem of abortion is that we want, and do not want, to allow to the unborn child the rights that belong to adults and children. When we think of a baby about to be born it seems absurd to think that the next few minutes or even hours could make so radical a difference to its status; yet as we go back in the life of the fetus we are more and more reluctant to say that this is a human being and must be treated as such. No doubt this is the deepest source of our dilemma, but it is not the only one. For we are also confused about the general question of what we may and may not do where the interests of human beings conflict. We have strong intuitions about certain cases, saying, for instance, that it is all right to raise the level of education in our country, though statistics allow us to predict that a rise in the suicide rate will follow, while it is not all right to kill the feeble-minded to aid cancer research. It is not easy, however, to see the principles involved, and one way of throwing light on the abortion issue will be by setting up parallels involving adults or children once born. So we will be able to isolate the “equal rights” issue and should be able to make some advance...
J.L. Mackie’s version of the logical problem of evil is a failure, as even he came to recognize. Contrary to current mythology, however, its failure was not established by Alvin Plantinga’s Free Will Defense. That’s because a defense is successful only if it is not reasonable to refrain from believing any of the claims that constitute it, but it is reasonable to refrain from believing the central claim of Plantinga’s Free Will Defense, namely the claim that, possibly, every essence suffers from transworld depravity.
In its original form, Nozick’s experience machine serves as a potent counterexample to a simplistic form of hedonism. The pleasurable life offered by the experience machine, it seems safe to say, lacks the requisite depth that many of us find necessary to lead a genuinely worthwhile life. Among other things, the experience machine offers no opportunities to establish meaningful relationships, or to engage in long-term artistic, intellectual, or political projects that survive one’s death. This intuitive objection finds some support in recent research regarding the psychological effects of phenomena such as video games or social media use. After a brief discussion of these problems, I will consider a variation of the experience machine in which many of these deficits are remedied. In particular, I’ll explore the consequences of creating a virtual world populated with strongly intelligent AIs with whom users could interact, and that could be engineered to survive the user’s death. The presence of these agents would allow for the cultivation of morally significant relationships, and the world’s long-term persistence would help ground possibilities for a meaningful, purposeful life in a way that Nozick’s original experience machine could not. While the creation of such a world is obviously beyond the scope of current technology, it represents a natural extension of the existing virtual worlds provided by current video games, and it provides a plausible “ideal case” toward which future virtual worlds will move. While this improved experience machine would seem to represent progress over Nozick’s original, I will argue that it raises a number of new problems stemming from the fact that the world was created to provide a maximally satisfying and meaningful life for the intended user. This, in turn, raises problems analogous in some ways to the problem(s) of evil faced by theists. In particular, I will suggest that it is precisely those features that would make a world most attractive to potential users—the fact that the AIs are genuinely moral agents whose well-being the user can significantly impact—that render its creation morally problematic, since they require that the AIs inhabiting the world be subject to unnecessary suffering. I will survey the main lines of response to the traditional problem of evil, and will argue that they are irrelevant to this modified case. I will close by considering what constraints on the future creation of virtual worlds, if any, might serve to allay the concerns identified in the previous discussion. I will argue that, insofar as the creation of such worlds would allow us to meet morally valuable purposes that could not be easily met otherwise, we would be unwise to prohibit it altogether. However, if our processes of creation are to be justified, they must take account of the interests of the moral agents that would come to exist as the result of our world creation.
I resolve the major challenge to an Expressivist theory of the meaning of normative discourse: the Frege–Geach Problem. Drawing on considerations from the semantics of directive language (e.g., imperatives), I argue that, although certain forms of Expressivism (like Gibbard’s) do run into at least one version of the Problem, it is reasonably clear that there is a version of Expressivism that does not.
The philosophical study of consciousness is chock full of thought experiments: John Searle’s Chinese Room, David Chalmers’ Philosophical Zombies, Frank Jackson’s Mary’s Room, and Thomas Nagel’s ‘What is it like to be a bat?’ among others. Many of these experiments and the endless discussions that follow them are predicated on what Chalmers famously referred to as the ‘hard’ problem of consciousness: for him, it is ‘easy’ to figure out how the brain is capable of perception, information integration, attention, reporting on mental states, etc., even though this is far from being accomplished at the moment. What is ‘hard’, claims the man of the p-zombies, is to account for phenomenal experience, or what philosophers usually call ‘qualia’: the ‘what is it like’, first-person quality of consciousness.
Here I discuss some theistic responses to the problem of animal pain and suffering with special attention to Michael Murray’s presentation in Nature Red in Tooth and Claw. The neo-Cartesian defenses he describes are reviewed, along with the appeal to nomic regularity and Murray’s emphasis on the progression of the universe from chaos to order. It is argued that, despite these efforts to prove otherwise, the problem of animal suffering remains a serious threat to the belief that an all-powerful, all-knowing, and all-good creator exists.
In this paper, I argue that there is a kind of evil, namely, the unequal distribution of natural endowments, or natural inequality, which presents theists with a new evidential problem of evil. The problem of natural inequality is a new evidential problem of evil not only because, to the best of my knowledge, it has not yet been discussed in the literature, but also because available theodicies, such as the free will defense and the soul-making defense, are not adequate responses in the face of this particular evil, or so I argue.
This is an opinionated overview of the Frege-Geach problem, in both its historical and contemporary guises. Covers Higher-order Attitude approaches, Tree-tying, Gibbard-style solutions, and Schroeder's recent A-type expressivist solution.
My primary aim is to defend a nonreductive solution to the problem of action. I argue that when you are performing an overt bodily action, you are playing an irreducible causal role in bringing about, sustaining, and controlling the movements of your body, a causal role best understood as an instance of agent causation. Thus, the solution that I defend employs a notion of agent causation, though emphatically not in defence of an account of free will, as most theories of agent causation are. Rather, I argue that the notion of agent causation introduced here best explains how it is that you are making your body move during an action, thereby providing a satisfactory solution to the problem of action.
We can classify theories of consciousness along two dimensions. First, a theory might be physicalist or dualist. Second, a theory might endorse any of these three views regarding causal relations between phenomenal properties (properties that characterize states of our consciousness) and physical properties: nomism (the two kinds of property interact through deterministic laws), acausalism (they do not causally interact), and anomalism (they interact but not through deterministic laws). In this paper, I explore anomalous dualism, a combination of views that has not previously been explored (as far as I know). I suggest that a kind of anomalous dualism, nonreductive anomalous panpsychism, promises to offer the best overall answer to two pressing issues for dualist views, the problem of mental causation and the mapping problem (the problem of predicting mind-body associations).
Philosophers and cognitive scientists have worried that research on animal mind-reading faces a ‘logical problem’: the difficulty of experimentally determining whether animals represent mental states (e.g. seeing) or merely the observable evidence (e.g. line-of-gaze) for those mental states. The most impressive attempt to confront this problem has been mounted recently by Robert Lurz. However, Lurz' approach faces its own logical problem, revealing this challenge to be a special case of the more general problem of distal content. Moreover, participants in this debate do not agree on criteria for representation. As such, future debate should either abandon the representational idiom or confront underlying semantic disagreements.
Moral non-cognitivists hope to explain the nature of moral agreement and disagreement as agreement and disagreement in non-cognitive attitudes. In doing so, they take on the task of identifying the relevant attitudes, distinguishing the non-cognitive attitudes corresponding to judgements of moral wrongness, for example, from attitudes involved in aesthetic disapproval or the sports fan’s disapproval of her team’s performance. We begin this paper by showing that there is a simple recipe for generating apparent counterexamples to any informative specification of the moral attitudes. This may appear to be a lethal objection to non-cognitivism, but a similar recipe challenges attempts by non-cognitivism’s competitors to specify the conditions underwriting the contrast between genuine and merely apparent moral disagreement. Because of its generality, this specification problem requires a systematic response, which, we argue, is most easily available for the non-cognitivist. Building on premisses congenial to the non-cognitivist tradition, we make the following claims: (1) In paradigmatic cases, wrongness-judgements constitute a certain complex but functionally unified state, and paradigmatic wrongness-judgements form a functional kind, preserved by homeostatic mechanisms. (2) Because of the practical function of such judgements, we should expect judges’ intuitive understanding of agreement and disagreement to be accommodating, treating states departing from the paradigm in various ways as wrongness-judgements. (3) This explains the intuitive judgements required by the counterexample-generating recipe, and more generally why various kinds of amoralists are seen as making genuine wrongness-judgements.
As anyone who has flown out of a cloud knows, the boundaries of a cloud are a lot less sharp up close than they can appear on the ground. Even when it seems clearly true that there is one, sharply bounded, cloud up there, really there are thousands of water droplets that are neither determinately part of the cloud, nor determinately outside it. Consider any object that consists of the core of the cloud, plus an arbitrary selection of these droplets. It will look like a cloud, and, circumstances permitting, rain like a cloud, and generally has as good a claim to be a cloud as any other object in that part of the sky. But we cannot say every such object is a cloud, else there would be millions of clouds where it seemed like there was one. And what holds for clouds holds for anything whose boundaries look less clear the closer you look at it. And that includes just about every kind of object we normally think about, including humans. Although this seems to be a merely technical puzzle, even a triviality, a surprising range of proposed solutions has emerged, many of them mutually inconsistent. It is not even settled whether a solution should come from metaphysics, or from philosophy of language, or from logic. Here we survey the options, and provide several links to the many topics related to the Problem.
The reference class problem arises when we want to assign a probability to a proposition (or sentence, or event) X, which may be classified in various ways, yet its probability can change depending on how it is classified. The problem is usually regarded as one specifically for the frequentist interpretation of probability and is often considered fatal to it. I argue that versions of the classical, logical, propensity and subjectivist interpretations also fall prey to their own variants of the reference class problem. Other versions of these interpretations apparently evade the problem. But I contend that they are all “no-theory” theories of probability: accounts that leave it quite obscure why probability should function as a guide to life, a suitable basis for rational inference and action. The reference class problem besets those theories that are genuinely informative and that plausibly constrain our inductive reasonings and decisions. I distinguish a “metaphysical” and an “epistemological” reference class problem. I submit that we can dissolve the former problem by recognizing that probability is fundamentally a two-place notion: conditional probability is the proper primitive of probability theory. However, I concede that the epistemological problem remains.
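The final proposal can be put in symbols (a minimal sketch in the style of primitive conditional-probability axiomatizations such as Popper functions; the notation is mine, not necessarily the author’s). The familiar ratio definition

\[
P(A \mid B) \;=\; \frac{P(A \wedge B)}{P(B)}, \qquad P(B) > 0,
\]

treats the one-place function $P(\cdot)$ as primitive and leaves $P(A \mid B)$ undefined when $P(B) = 0$. Taking the two-place function $P(\cdot \mid \cdot)$ as primitive reverses this: unconditional probability is recovered as $P(A) := P(A \mid \top)$ for a tautology $\top$, and $P(A \mid B)$ can be well defined even for probability-zero conditions.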
The main goal of this paper is to investigate what explanatory resources Robert Brandom’s distinction between acknowledged and consequential commitments affords in relation to the problem of logical omniscience. With this distinction, the importance of the doxastic perspective under consideration for the relationship between logic and norms of reasoning is emphasized, and it becomes possible to handle a number of problematic cases discussed in the literature without thereby incurring a commitment to revisionism about logic. One such case in particular is the preface paradox, which will receive an extensive treatment. As we shall see, the problem of logical omniscience arises not only within theories based on deductive logic but also within the recent paradigm shift in the psychology of reasoning. So dealing with this problem is important not only for philosophical purposes but also from a psychological perspective.
Barnett and Block (J Bus Ethics 18(2):179–194, 2011) argue that one cannot distinguish between deposits and loans due to the continuum problem of maturities and because future goods do not exist—both essential characteristics that distinguish deposit from loan contracts. In a similar way, but leading to opposite conclusions, Cachanosky (forthcoming) maintains that both maturity mismatching and fractional reserve banking are ethically justified, as these contracts are equivalent. We argue herein that the economic and legal differences between genuine deposit and loan contracts are clear. This implies different legal obligations for these contracts, a necessary step in assessing the ethics of both fractional reserve banking and maturity mismatching. While the former is economically, legally, and perhaps most importantly ethically problematic, there are no such troubles with the latter.
Self-driving cars hold out the promise of being safer than manually driven cars. Yet they cannot be 100% safe. Collisions are sometimes unavoidable. So self-driving cars need to be programmed for how they should respond to scenarios where collisions are highly likely or unavoidable. The accident-scenarios self-driving cars might face have recently been likened to the key examples and dilemmas associated with the trolley problem. In this article, we critically examine this tempting analogy. We identify three important ways in which the ethics of accident-algorithms for self-driving cars and the philosophy of the trolley problem differ from each other. These concern: the basic decision-making situation faced by those who decide how self-driving cars should be programmed to deal with accidents; moral and legal responsibility; and decision-making in the face of risks and uncertainty. In discussing these three areas of disanalogy, we isolate and identify a number of basic issues and complexities that arise within the ethics of the programming of self-driving cars.
Many of us agree that we ought not to wrong future people, but there remains disagreement about which of our actions can wrong them. Can we wrong individuals whose lives are worth living by taking actions that result in their very existence? The problem of justifying an answer to this question has come to be known as the non-identity problem. While the literature contains an array of strategies for solving the problem, in this paper I will take what I call the harm-based approach, and I will defend an account of harming—which I call the existence account of harming—that can vindicate this approach.

Roughly put, the harm-based approach holds that, by acting in ways that result in the existence of individuals whose lives are worth living, we can harm and thereby wrong those individuals. An initially plausible way to try to justify this approach is to endorse the non-comparative account of harming, which holds that an event harms an individual just in case it causes her to be in a bad state, such that the state’s badness does not derive from a comparison between that state and some alternative state that the individual would or could have been in. However, many philosophers argue that the non-comparative account of harming is inadequate, and one might be tempted to infer from this that any harm-based approach to the non-identity problem will fail. My proposal, which I call the existence account of harming, will show that this inference is faulty: we can vindicate the harm-based approach without relying on the non-comparative account of harming.
Explaining the mind by building machines with minds runs into the other-minds problem: How can we tell whether any body other than our own has a mind when the only way to know is by being the other body? In practice we all use some form of Turing Test: If it can do everything a body with a mind can do such that we can't tell them apart, we have no basis for doubting it has a mind. But what is "everything" a body with a mind can do? Turing's original "pen-pal" version (the TT) only tested linguistic capacity, but Searle has shown that a mindless symbol-manipulator could pass the TT undetected. The Total Turing Test (TTT) calls for all of our linguistic and robotic capacities; immune to Searle's argument, it suggests how to ground a symbol manipulating system in the capacity to pick out the objects its symbols refer to. No Turing Test, however, can guarantee that a body has a mind. Worse, nothing in the explanation of its successful performance requires a model to have a mind at all. Minds are hence very different from the unobservables of physics (e.g., superstrings); and Turing Testing, though essential for machine-modeling the mind, can really only yield an explanation of the body.
Proponents of the problem of animal suffering state that the great amount of animal death and suffering found in Earth’s natural history provides evidence against the truth of theism. In particular, philosophers such as Paul Draper have argued that regardless of the antecedent probability of theism and naturalism, animal suffering provides positive evidence for the truth of naturalism over theism. While theists have attempted to provide answers to the problem of animal suffering, almost none have argued that animal suffering and death can be seen as positive evidence for theism. This essay will discuss several arguments from the writings of Thomas Aquinas that can be used to show that animal suffering and death are to be expected in theistic universes. In the first section, I discuss evidential arguments for naturalism from animal suffering. Next, I provide an overview of Aquinas’ arguments, particularly in Book II of the Summa Contra Gentiles. After this, I discuss the implications these arguments have for theistic universes. Finally, I conclude that these arguments refute evidential arguments for naturalism from animal suffering and also provide evidence that favors theism.
In opposition to mainstream theory of mind approaches, some contemporary perceptual accounts of social cognition do not consider the central question of social cognition to be the problem of access to other minds. These perceptual accounts draw heavily on phenomenological philosophy and propose that others' mental states are “directly” given in the perception of the others' expressive behavior. Furthermore, these accounts contend that phenomenological insights into the nature of social perception lead to the dissolution of the access problem. We argue, on the contrary, that the access problem is a genuine problem that must be addressed by any account of social cognition, perceptual or non-perceptual, because we cannot cast the access problem as a false problem without violating certain fundamental intuitions about other minds. We elaborate the fundamental intuitions as three constraints on any theory of social perception: the Immediacy constraint; the Transcendence constraint; and the Accessibility constraint. We conclude with an outline of an account of perceiving other minds that meets the three constraints.
Elaborating on the notions that humans possess different modalities of decision-making and that these are often influenced by moral considerations, we conducted an experimental investigation of the Trolley Problem. We presented the participants with two standard scenarios (‘lever’ and ‘stranger’) either in the usual or in reversed order. We observe that responses to the lever scenario, which result from (moral) reasoning, are affected by our manipulation; whereas responses to the stranger scenario, triggered by moral emotions, are unaffected. Furthermore, when asked to express general moral opinions on the themes of the Trolley Problem, about half of the participants reveal some inconsistency with the responses they had previously given.
This paper explores the relationship between scepticism and epistemic relativism in the context of recent history and philosophy of science. More specifically, it seeks to show that significant treatments of epistemic relativism by influential figures in the history and philosophy of science draw upon the Pyrrhonian problem of the criterion. The paper begins with a presentation of the problem of the criterion as it occurs in the work of Sextus Empiricus. It is then shown that significant treatments of epistemic relativism in recent history and philosophy of science (critical rationalism, historical philosophy of science and the strong programme) draw upon the problem of the criterion. It is briefly suggested that a particularist response to the problem of the criterion may be put to good use against epistemic relativism.
Expressivists, such as Blackburn, analyse sentences such as 'S thinks that it ought to be the case that p' as 'S hoorays that p'. A problem is that the former sentence can be negated in three different ways, but the latter in only two. The distinction between refusing to accept a moral judgement and accepting its negation therefore cannot be accounted for. This is shown to undermine Blackburn's solution to the Frege-Geach problem.
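The mismatch in negations can be laid out schematically (my own rendering of this standard counting point, with O for 'it ought to be that' and H for the expressivist's hooraying; an illustration, not the paper's notation):

\[
\begin{aligned}
&\text{(1) } \neg(\text{S thinks } Op) &&\leftrightarrow\ \neg H(S, p) && \text{(not having the judgement)}\\
&\text{(2) } \text{S thinks } O\neg p &&\leftrightarrow\ H(S, \neg p) && \text{(judging the opposite)}\\
&\text{(3) } \text{S thinks } \neg Op &&\leftrightarrow\ \textit{?} && \text{(accepting the negation)}
\end{aligned}
\]

The hooray construction offers only the two negation sites in (1) and (2), so (3), accepting that it is not the case that it ought to be that p, collapses into the mere absence of an attitude in (1).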
Engineering ethics entails three frames of reference: individual, professional, and social. “Microethics” considers individuals and internal relations of the engineering profession; “macroethics” applies to the collective social responsibility of the profession and to societal decisions about technology. Most research and teaching in engineering ethics, including online resources, has had a “micro” focus. Mechanisms for incorporating macroethical perspectives include: integrating engineering ethics and science, technology and society (STS); closer integration of engineering ethics and computer ethics; and consideration of the influence of professional engineering societies and corporate social responsibility programs on ethical engineering practice. Integrating macroethical issues and concerns in engineering ethics involves broadening the context of ethical problem solving. This in turn implies: developing courses emphasizing both micro and macro perspectives; providing faculty development that includes training in both STS and practical ethics; and revision of curriculum materials, including online resources. Multidisciplinary collaboration is recommended 1) to create online case studies emphasizing ethical decision making in individual, professional, and societal contexts; 2) to leverage existing online computer ethics resources with relevance to engineering education and practice; and 3) to create transparent linkages between public policy positions advocated by professional societies and codes of ethics.
In this paper, it is argued that there are (at least) two different kinds of ‘epistemic normativity’ in epistemology, which can be scrutinized and revealed by comparison with some naturalistic studies of ethics. The first kind of epistemic normativity can be naturalized, but the other cannot. The doctrines of Quine’s naturalized epistemology are first introduced; then Kim’s critique of Quine’s proposal is examined. It is argued that Quine’s naturalized epistemology is able to save some room for the concept of epistemic normativity and that his doctrine can therefore be protected against Kim’s critique. But it is only the first kind of epistemic normativity that can be naturalized in epistemology. With the assistance of Goldman’s fake barn case, it is shown that the concept of epistemic normativity involved in the concept of knowing cannot be fully naturalized. The Gettier problem indicates that Quine gets only a partially right idea concerning whether epistemology can (and should) be naturalized.
In his paper The Opposite of Human Enhancement: Nanotechnology and the Blind Chicken Problem (Nanoethics 2:305–316, 2008) Paul Thompson argues that the possibility of disenhancing animals in order to improve animal welfare poses a philosophical conundrum. Although many people intuitively think such disenhancement would be morally impermissible, it’s difficult to find good arguments to support such intuitions. In this brief response to Thompson, I accept that there’s a conundrum here. But I argue that if we seriously consider whether creating beings can harm or benefit them, and introduce the non-identity problem to discussions of animal disenhancement, the conundrum is even deeper than Thompson suggests.
In the last 20 years, a stream of research emerged under the label of “complex problem solving” (CPS). This research was intended to describe the way people deal with complex, dynamic, and intransparent situations. Complex computer-simulated scenarios were used as stimulus material in psychological experiments. This line of research led to subtle insights into the way people deal with complexity and uncertainty. Besides these knowledge-rich, realistic, intransparent, complex, dynamic scenarios with many variables, a second line of research used simpler, knowledge-lean scenarios with a low number of variables (“minimal complex systems”, MCS) that have been proposed recently in problem-solving research for the purpose of educational assessment. In both cases, the idea behind the use of microworlds is to increase the validity of problem-solving tasks by presenting interactive environments that can be explored and controlled by participants while pursuing certain action goals. The main argument presented here is: both types of systems – CPS and MCS – can only be dealt with successfully if causal dependencies between input and output variables are identified and used for system control. System knowledge is necessary for control and intervention. But CPS and MCS differ in how causal dependencies are identified and how the mental model is constructed; therefore, they cannot be compared directly to each other with respect to the cognitive processes that are necessary for solving the tasks. Knowledge-poor MCS tasks address only a small fraction of the cognitive processes and structures needed for knowledge-rich CPS situations.
A belief is stored if it is in no way before the subject’s mind. The problem of stored beliefs is that of satisfactorily explaining how the stored beliefs which seem justified are indeed justified. In this paper I challenge the two main internalist attempts to solve this problem. Internalism about epistemic justification, at a minimum, states that one’s mental life alone determines what one is justified in believing. First I dispute the attempt from epistemic conservatism, which states that believing justifies retaining belief. Then I defend the attempt from dispositionalism, which assigns a justifying role to dispositions, from some key objections. But by drawing on cognitive psychological research I show that, for internalism, the problem of stored beliefs remains.
In this paper, I argue that, just as the problem of unconceived alternatives provides a basis for a New Induction on the History of Science to the effect that a realist view of science is unwarranted, the problem of unconceived objections provides a basis for a New Induction on the History of Philosophy to the effect that a realist view of philosophy is unwarranted. I raise this problem not only for skepticism’s sake but also for the sake of making a point about philosophical argumentation, namely, that anticipating objections to one’s claim is not the same as supporting one’s claim. In other words, defending p from objections does not amount to support or evidence for p. This, in turn, presents dialectical and pragma-dialectical approaches to argumentation with the following question: does proper argumentation require that arguers anticipate and respond to unconceived objections?
The new evil demon problem (NEDP) is often considered to be a serious obstacle for externalist theories of epistemic justification. In this paper, I aim to show that the new evil demon problem also afflicts the two most prominent forms of internalism: moderate internalism and historical internalism. Since virtually all internalists accept at least one of these two forms, it follows that virtually all internalists face the NEDP. My secondary thesis is that many epistemologists face a dilemma. The only form of internalism that is immune to the NEDP, strong internalism, is a very radical and revisionary view – a large number of epistemologists would have to significantly revise their views about justification in order to accept it. Hence, either epistemologists must accept a theory that is susceptible to the NEDP or accept a very radical and revisionary view.
A difficulty is exposed in Allan Gibbard's solution to the embedding/Frege-Geach problem, namely that the difference between refusing to accept a normative judgement and accepting its negation is ignored. This is shown to undermine the whole solution.
In a formal theory of induction, inductive inferences are licensed by universal schemas. In a material theory of induction, inductive inferences are licensed by facts. With this change in the conception of the nature of induction, I argue that the celebrated “problem of induction” can no longer be set up and is thereby dissolved. Attempts to recreate the problem in the material theory of induction fail. They require relations of inductive support to conform to an unsustainable, hierarchical empiricism.
In some situations in which undesirable collective effects occur, it is very hard, if not impossible, to hold any individual reasonably responsible. Such a situation may be referred to as the problem of many hands. In this paper we investigate how the problem of many hands can best be understood and why, and when, exactly it constitutes a problem. After analyzing climate change as an example, we propose to define the problem of many hands as the occurrence of a gap in the distribution of responsibility that may be considered morally problematic. Whether a gap is morally problematic, we suggest, depends on the reasons why responsibility is distributed. This, in turn, depends, at least in part, on the sense of responsibility employed, a main distinction being that between backward-looking and forward-looking responsibility.
I argue that medieval solutions to the limit decision problem imply four-dimensionalism, i.e. the view according to which substances that persist through time are extended through time as well as through space, and have different temporal parts at different times.
I extend my direct virtue epistemology to explain how a knowledge-first framework can account for two kinds of positive epistemic standing, one tracked by externalists, who claim that the virtuous duplicate lacks justification, the other tracked by internalists, who claim that the virtuous duplicate has justification, and moreover that such justification is not enjoyed by the vicious duplicate. It also explains what these kinds of epistemic standing have to do with each other. I argue that all justified beliefs are good candidates for knowledge, and are such because they are exercises of competences to know. However, there are two importantly different senses in which a belief may be a good candidate for knowledge, one corresponding to an externalist kind of justification and the other corresponding to an internalist one. I show how the account solves the new evil demon problem in a more satisfactory way than existing accounts. We end up with a view of knowledge, justification, and rationality that is plausible, motivated, and theoretically unified.
Epistemic luck has been the focus of much discussion recently. Perhaps the most general knowledge-precluding type is veritic luck, where a belief is true but might easily have been false. Veritic luck has two sources, and so eliminating it requires two distinct conditions for a theory of knowledge. I argue that, when one sets out those conditions properly, a solution to the generality problem for reliabilism emerges.
Vogel argues that sensitivity accounts of knowledge are implausible because they entail that we cannot have any higher-level knowledge that our beliefs are true, not false. Becker and Salerno object that Vogel is mistaken because he does not formalize higher-level beliefs adequately. They claim that if formalized correctly, higher-level beliefs are sensitive, and can therefore constitute knowledge. However, these accounts do not consider the belief-forming method as sensitivity accounts require. If we take bootstrapping as the belief-forming method, as the discussed cases suggest, then we face a generality problem. Our higher-level beliefs as formalized by Becker and Salerno turn out to be sensitive according to a wide reading of bootstrapping, but insensitive according to a narrow reading. This particular generality problem does not arise for the alternative accounts of process reliabilism and basis-relative safety. Hence, sensitivity accounts not only deliver opposite results given different formalizations of higher-level beliefs, but also for the same formalization, depending on how we interpret bootstrapping. Therefore, sensitivity accounts do not fail because they make higher-level knowledge impossible, as Vogel argues, and they do not succeed in allowing higher-level knowledge, as Becker and Salerno suggest. Rather, their problem is that they deliver far too heterogeneous results.
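For reference, the sensitivity condition at issue, in the method-relativized form the abstract says such accounts require (a standard formulation from the literature, not a quotation from Vogel or from Becker and Salerno): S knows that p via method M only if

\[
\neg p \;\Box\!\!\rightarrow\; \neg(S \text{ believes } p \text{ via } M),
\]

that is, in the closest worlds where p is false, S does not believe p via M. The generality problem above then turns on how M is individuated: reading ‘bootstrapping’ widely or narrowly yields opposite verdicts on one and the same formalized higher-level belief.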
In this paper, I argue that even if the Hard Problem of Content, as identified by Hutto and Myin, is important, it was already solved in naturalized semantics, and satisfactory solutions to the problem do not rely merely on the notion of information as covariance. I point out that Hutto and Myin have double standards for linguistic and mental representation, which leads to a peculiar inconsistency. Were they to apply the same standards to basic and linguistic minds, they would either have to embrace representationalism or turn to semantic nihilism, which is, as I argue, an unstable and unattractive position. Hence, I conclude, their book does not offer an alternative to representationalism. At the same time, it reminds us that representational talk in cognitive science cannot be taken for granted and that information is different from mental representation. Although this claim is not new, Hutto and Myin defend it forcefully and elegantly.
A philosophical standard in the debates concerning material constitution is the case of a statue and a lump of clay, Goliath and Lumpl, respectively. According to the story, Lumpl and Goliath are coincident throughout their respective careers. Monists hold that they are identical; pluralists that they are distinct. This paper is concerned with a particular objection to pluralism, the Grounding Problem. The objection is roughly that the pluralist faces a legitimate explanatory demand to explain various differences she alleges between Lumpl and Goliath, but that the pluralist’s theory lacks the resources to give any such explanation. In this paper, I explore the question of whether there really is any problem of this sort. I argue (i) that explanatory demands that are clearly legitimate are easy for the pluralist to meet; (ii) that even in cases of explanatory demands whose legitimacy is questionable the pluralist has some overlooked resources; and (iii) there is some reason for optimism about the pluralist’s prospects for meeting every legitimate explanatory demand. In short, no clearly adequate statement of a Grounding Problem is extant, and there is some reason to believe that the pluralist can overcome any Grounding Problem that we haven’t thought of yet.
Many current popular views in epistemology require a belief to be the result of a reliable process (aka ‘method of belief formation’ or ‘cognitive capacity’) in order to count as knowledge. This means that the generality problem rears its head, i.e. the kind of process in question has to be spelt out, and this looks difficult to do without being either over- or under-general. In response to this problem, I propose that we should adopt a more fine-grained account of the epistemic basing relation, at which point the generality problem becomes easy to solve.
Our pollution of the environment seems set to lead to widespread problems in the future, including disease, scarcity of resources, and bloody conflicts. It is natural to think that we are required to stop polluting because polluting harms the future individuals who will be faced with these problems. This natural thought faces Derek Parfit’s famous Non-Identity Problem (1984, pp. 361–364). The people who live on the polluted earth would not have existed if we had not polluted. Our polluting behaviour does not make these individuals worse off. It may therefore seem that we do not harm them by polluting. Parfit argues that we should replace person-affecting principles with an impersonal principle of beneficence, Principle Q (1984, p. 360). I argue that Principle Q cannot give an adequate account of our duties to refrain from polluting. I consider attempts to solve the Non-Identity Problem by denying that to harm someone an agent must make them worse off. I argue that such responses provide a partial solution to the Non-Identity Problem. They do show that we harm future individuals in a morally relevant sense by polluting. Nonetheless, this is only a partial solution. The Non-Identity Problem still suggests that our harm-based reasons not to pollute are less strong than we intuitively believe. Thus on its own an appeal to the claim that we harm future individuals is not able to give a fully satisfactory account of why we are required not to pollute.