It is often claimed that the debate between presentism and eternalism is merely verbal: whether we use tensed, detensed, or tenseless notions of existence, there is no difference in the metaphysical statements accepted by adherents of the two views. On the contrary, this paper shows that when we express their positions using, in accordance with the intentions of the presentists and the eternalists, the tensed notion of existence (in the case of the presentists) and the detensed or tenseless notion (in the case of the eternalists), the controversy remains deep and very important for us, because the two ontological claims express different attitudes to the existence of the flow of time. It is shown that the proposed approach to presentism and eternalism not only exactly expresses the intentions of the adherents of both views but also offers a better understanding of them, joining together seemingly different theses maintained by the presentists and the eternalists and explaining at the same time the dynamism of the presentists' ontology. The paper takes for granted that we should assess metaphysical theories in a similar way as we assess scientific theories, that is, on the basis of their explanatory value.
Moral contextualism is the view that claims like ‘A ought to X’ are implicitly relative to some (contextually variable) standard. This leads to a problem: what are fundamental moral claims like ‘You ought to maximize happiness’ relative to? If this claim is relative to a utilitarian standard, then its truth conditions are trivial: ‘Relative to utilitarianism, you ought to maximize happiness’. But it certainly doesn’t seem trivial that you ought to maximize happiness (utilitarianism is a highly controversial position). Some people believe this problem is a reason to prefer a realist or error theoretic semantics of morals. I argue two things: first, that plausible versions of all these theories are afflicted by the problem equally, and second, that any solution available to the realist and error theorist is also available to the contextualist. So the problem of triviality does not favour noncontextualist views of moral language.
The “brain in a vat” thought experiment is presented and refuted by appeal to the intuitiveness of what the author informally calls “the eye for an eye principle”, namely: conscious mental states typically involved in sensory processes can conceivably be brought about by direct stimulation of the brain, and in all such cases the utilized stimulus field will be in the relevant sense equivalent to the actual PNS or part thereof. In the second section, four classic problems of Functionalism are given novel solutions based on the inclusion of peripheral nervous processes as constituents of mental states: the mad pain problem, the problem of pseudo-normal vision, the China-brain problem, and the triviality problem.
As anyone who has flown out of a cloud knows, the boundaries of a cloud are a lot less sharp up close than they can appear on the ground. Even when it seems clearly true that there is one, sharply bounded, cloud up there, really there are thousands of water droplets that are neither determinately part of the cloud, nor determinately outside it. Consider any object that consists of the core of the cloud, plus an arbitrary selection of these droplets. It will look like a cloud, will (circumstances permitting) rain like a cloud, and generally has as good a claim to be a cloud as any other object in that part of the sky. But we cannot say every such object is a cloud, else there would be millions of clouds where it seemed like there was one. And what holds for clouds holds for anything whose boundaries look less clear the closer you look at it. And that includes just about every kind of object we normally think about, including humans. Although this seems to be a merely technical puzzle, even a triviality, a surprising range of proposed solutions has emerged, many of them mutually inconsistent. It is not even settled whether a solution should come from metaphysics, or from philosophy of language, or from logic. Here we survey the options, and provide several links to the many topics related to the Problem.
This paper discusses and relates two puzzles for indicative conditionals: a puzzle about indeterminacy and a puzzle about triviality. Both puzzles arise because of Ramsey's Observation, which states that the probability of a conditional is equal to the conditional probability of its consequent given its antecedent. The puzzle of indeterminacy is the problem of reconciling this fact about conditionals with the fact that they seem to lack truth values at worlds where their antecedents are false. The puzzle of triviality is the problem of reconciling Ramsey's Observation with various triviality proofs which establish that Ramsey's Observation cannot hold in full generality. In the paper, I argue for a solution to the indeterminacy puzzle and then apply the resulting theory to the triviality puzzle. On the theory I defend, the truth conditions of indicative conditionals are highly context dependent and such that an indicative conditional may be indeterminate in truth value at each possible world throughout some region of logical space and yet still have a nonzero probability throughout that region.
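For readers unfamiliar with the triviality proofs this abstract mentions, the following is a standard sketch of Lewis's (1976) first result, not the paper's own formulation. Suppose Ramsey's Observation, P(A → C) = P(C | A), holds for a probability function P and for every function obtained from P by conditionalizing, and suppose P(A ∧ C) > 0 and P(A ∧ ¬C) > 0. Then, by the law of total probability:

```latex
\begin{align*}
P(A \to C) &= P(A \to C \mid C)\,P(C) + P(A \to C \mid \lnot C)\,P(\lnot C) \\
           &= P(C \mid A \wedge C)\,P(C) + P(C \mid A \wedge \lnot C)\,P(\lnot C) \\
           &= 1 \cdot P(C) + 0 \cdot P(\lnot C) \\
           &= P(C).
\end{align*}
```

Hence P(C | A) = P(C) for all such A and C: every antecedent would be probabilistically irrelevant to every consequent, which only trivial probability functions can satisfy. This is why Ramsey's Observation cannot hold in full generality.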
Presentism is usually understood as the thesis that only the present exists, whereas the rival theory of eternalism is usually understood as the thesis that past, present, and future things are all equally real. The significance of this debate has been threatened by the so-called triviality objection, which allegedly shows that the presentist thesis is either trivially true or obviously false: presentism is trivially true if it is read as saying that everything that exists now is present, and it is obviously false if read as saying that everything that has existed, exists, or will exist is present. If eternalism is taken as the negation of presentism, it is also either trivially false or obviously true. In this paper, I try to respond to the triviality objection on behalf of presentism. In the second section, I will examine how the argument proceeds. In the third section, I will reflect on three possible ways to respond but will argue that none of them succeeds in giving a satisfactory solution. I will then try to clarify the core idea of presentism and to suggest that if we characterise presentism accurately, the problem will disappear. In the fourth section, I will offer a plausible definition of presentism, will show how it can avoid the triviality objection, and will demonstrate why it is advantageous to accept the version of presentism I offer.
Much of the literature on "ceteris paribus" laws is based on a misguided egalitarianism about the sciences. For example, it is commonly held that the special sciences are riddled with ceteris paribus laws; from this many commentators conclude that if the special sciences are not to be accorded a second-class status, it must be ceteris paribus all the way down to fundamental physics. We argue that the (purported) laws of fundamental physics are not hedged by ceteris paribus clauses and provisos. Furthermore, we show that not only is there no persuasive analysis of the truth conditions for ceteris paribus laws, there is not even an acceptable account of how they are to be saved from triviality or how they are to be melded with standard scientific methodology. Our way out of this unsatisfactory situation is to reject the widespread notion that the achievements and the scientific status of the special sciences must be understood in terms of ceteris paribus laws.
The classical theory of semantic information (ESI), as formulated by Bar-Hillel and Carnap in 1952, does not give a satisfactory account of the problem of what information, if any, analytically and/or logically true sentences have to offer. According to ESI, analytically true sentences lack informational content, and any two analytically equivalent sentences convey the same piece of information. This problem is connected with Cohen and Nagel's paradox of inference: Since the conclusion of a valid argument is contained in the premises, it fails to provide any novel information. Again, ESI does not give a satisfactory account of the paradox. In this paper I propose a solution based on the distinction between empirical information and analytic information. Declarative sentences are informative due to their meanings. I construe meanings as structured hyperintensions, modelled in Transparent Intensional Logic as so-called constructions. These are abstract, algorithmically structured procedures whose constituents are sub-procedures. My main thesis is that constructions are the vehicles of information. Hence, although analytically true sentences provide no empirical information about the state of the world, they convey analytic information, in the shape of constructions prescribing how to arrive at the truths in question. Moreover, even though analytically equivalent sentences have equal empirical content, their analytic content may be different. Finally, though the empirical content of the conclusion of a valid argument is contained in the premises, its analytic content may be different from the analytic content of the premises and thus convey a new piece of information.
I defend a formulation of the Ramsey Test with a condition for accepting negations of conditionals. It is implicit in the assumptions of the triviality theorems of Gärdenfors, Harper, and Lewis; and it allows for a unified proof of those theorems, from weaker assumptions about belief revision. This leads to a proof of McGee’s thesis that iterated conditionals do not obey modus ponens. †To contact the author, please write to: Institute of Philosophy, University of Leuven, Kardinaal Mercierplein 2, B‐3000 Leuven, Belgium; e‐mail: firstname.lastname@example.org.
The aim of the paper is to critically assess the idea that reasons for action are provided by desires. I start from the claim that the most often employed meta-ethical background for the Model is ethical naturalism; I then argue against the Model through its naturalist background. For the latter purpose I make use of two objections that are both intended to refute naturalism per se. One is G.E. Moore’s Open Question Argument, the other is Derek Parfit’s Triviality Objection. I show that naturalists might be able to avoid both objections if they can vindicate the reduction proposed. This, however, leads to further conditions whose fulfillment is necessary for the success of the vindication. I deal with one such condition, which I borrow from Peter Railton and Mark Schroeder: the demand that naturalist reductions must be tolerably revisionist. In the remainder of the paper I argue that the most influential versions of the Model are intolerably revisionist. The first problem concerns the picture of reasons that many recent formulations of the Model advocate. By using an objection from Michael Bedke, I show that on this interpretation obvious reasons won’t be accounted for by the Model. The second problem concerns the idealization that is also often part of the Model. Invoking an argument of Connie Rosati’s, I show that the best form of idealization, the ideal advisor account, is inadequate. Hence, though not the knock-down arguments they were intended to be, OQA and TO do pose a serious threat to the Model.
"THE central problem in moral philosophy is commonly known as the is-ought problem." So runs the opening sentence of the introduction to a recent volume of readings on this issue. Taken as a statement about the preoccupations of moral philosophers of the present century, we can accept this assertion. The problem of how statements of fact are related to moral judgments has dominated recent moral philosophy. Associated with this problem is another, which has also been given considerable attention - the question of how morality is to be defined. The two issues are linked, since some definitions of morality allow us to move from statements of fact to moral judgments, while others do not. In this article I shall take the two issues together, and try to show that they do not merit the amount of attention they have been given. I shall argue that the differences between the contending parties are terminological, and that there are various possible terminologies, none of which has, on balance, any great advantage over any other terminology. So instead of continuing to regard these issues as central, moral philosophers could, I believe, "agree to disagree" about the "is-ought" problem, and about the definition of morality, provided only that everyone was careful to stipulate how he was using the term "moral" and was aware of the implications and limitations of the definition he was using. Moral philosophers could then move on to consider more important issues.
According to physicalism, everything is physical; that is, there are no entities (or no more restricted sorts of entities) that are not physical. In this paper, I shall examine the truth of this thesis by presenting a triviality objection against physicalism that is in some ways similar to the one advanced against presentism. Firstly, I shall distinguish between two different definitions of the physical (roughly, every entity is physical-1 iff it has some feature F, such as impenetrability or exact spatio-temporal location, while every entity is physical-2 iff it is accepted by some ideal, true and complete physical theory) and between unrestricted and restricted versions of physicalism (according to the former, physicalism is true for every entity while, according to the latter, it is true only with regard to some restricted domain of entities). Secondly, I shall argue that physicalists have to deal with six different problems: the triviality of some versions of physicalism; the content-indeterminacy of the physical and the justification of the “faith” that we will formulate some ideal, true and complete physical theory (given the definition of the physical-2); the restricted domain problem (restricted versions of physicalism seem not to exclude the existence of seemingly non-physical entities); the (possible and plausible) incompatibility between the two different definitions of the physical; and the problem of the extension of physical investigation.
I build a case for the impossibility of natural necessity as anything other than a species of metaphysical necessity – the necessity obtaining in virtue of the essences of natural objects. Aristotelian necessitarianism about the laws of nature is clarified and defended. I contrast it with E.J. Lowe’s contingentism about the laws. I examine Lowe’s solution to the circularity/triviality problem besetting natural necessity understood as relative necessity. Lowe’s way out is subject to serious problems unless it is given an essentialist turn, which he declines to do. Further, his defence of contingency in terms of possible variation in the natural constants is found wanting, as is a related defence given by Kit Fine. I examine and raise problems for a recent, Lowe-inspired defence of a hybrid view of the modal status of laws given by Tuomas Tahko. Aristotelian necessitarianism can account for the sorts of phenomena to which contingentists typically appeal.
As for most measurement procedures in the course of their development, measures of consciousness face the problem of coordination, i.e., the problem of knowing whether a measurement procedure actually measures what it is intended to measure. I focus on the case of the Perceptual Awareness Scale to illustrate how ignoring this problem leads to ambiguous interpretations of subjective reports in consciousness science. In turn, I show that empirical results based on this measurement procedure might be systematically misinterpreted.
The “demarcation problem,” the issue of how to separate science from pseudoscience, has been around since fall 1919—at least according to Karl Popper’s (1957) recollection of when he first started thinking about it. In Popper’s mind, the demarcation problem was intimately linked with one of the most vexing issues in philosophy of science, David Hume’s problem of induction (Vickers 2010) and, in particular, Hume’s contention that induction cannot be logically justified by appealing to the fact that “it works,” as that in itself is an inductive argument, thereby potentially plunging the philosopher straight into the abyss of a viciously circular argument.
One of the reasons why most of us feel puzzled about the problem of abortion is that we want, and do not want, to allow to the unborn child the rights that belong to adults and children. When we think of a baby about to be born it seems absurd to think that the next few minutes or even hours could make so radical a difference to its status; yet as we go back in the life of the fetus we are more and more reluctant to say that this is a human being and must be treated as such. No doubt this is the deepest source of our dilemma, but it is not the only one. For we are also confused about the general question of what we may and may not do where the interests of human beings conflict. We have strong intuitions about certain cases; saying, for instance, that it is all right to raise the level of education in our country, though statistics allow us to predict that a rise in the suicide rate will follow, while it is not all right to kill the feeble-minded to aid cancer research. It is not easy, however, to see the principles involved, and one way of throwing light on the abortion issue will be by setting up parallels involving adults or children once born. So we will be able to isolate the “equal rights” issue and should be able to make some advance...
I resolve the major challenge to an Expressivist theory of the meaning of normative discourse: the Frege–Geach Problem. Drawing on considerations from the semantics of directive language (e.g., imperatives), I argue that, although certain forms of Expressivism (like Gibbard’s) do run into at least one version of the Problem, it is reasonably clear that there is a version of Expressivism that does not.
Ever since Socrates, philosophers have been in the business of asking questions of the type “What is X?” The point has not always been to actually find out what X is, but rather to explore how we think about X, to bring up to the surface wrong ways of thinking about it, and hopefully in the process to achieve an increasingly better understanding of the matter at hand. In the early part of the twentieth century one of the most ambitious philosophers of science, Karl Popper, asked that very question in the specific case in which X = science. Popper termed this the “demarcation problem,” the quest for what distinguishes science from nonscience and pseudoscience (and, presumably, also the latter two from each other).
J.L. Mackie’s version of the logical problem of evil is a failure, as even he came to recognize. Contrary to current mythology, however, its failure was not established by Alvin Plantinga’s Free Will Defense. That’s because a defense is successful only if it is not reasonable to refrain from believing any of the claims that constitute it, but it is reasonable to refrain from believing the central claim of Plantinga’s Free Will Defense, namely the claim that, possibly, every essence suffers from transworld depravity.
The philosophical study of consciousness is chock full of thought experiments: John Searle’s Chinese Room, David Chalmers’ Philosophical Zombies, Frank Jackson’s Mary’s Room, and Thomas Nagel’s ‘What is it like to be a bat?’ among others. Many of these experiments and the endless discussions that follow them are predicated on what Chalmers famously referred to as the ‘hard’ problem of consciousness: for him, it is ‘easy’ to figure out how the brain is capable of perception, information integration, attention, reporting on mental states, etc., even though this is far from being accomplished at the moment. What is ‘hard’, claims the man of the p-zombies, is to account for phenomenal experience, or what philosophers usually call ‘qualia’: the ‘what is it like’, first-person quality of consciousness.
Here I discuss some theistic responses to the problem of animal pain and suffering with special attention to Michael Murray’s presentation in Nature Red in Tooth and Claw. The neo-Cartesian defenses he describes are reviewed, along with the appeal to nomic regularity and Murray’s emphasis on the progression of the universe from chaos to order. It is argued that, despite these efforts to prove otherwise, the problem of animal suffering remains a serious threat to the belief that an all-powerful, all-knowing, and all-good creator exists.
Self-driving cars hold out the promise of being safer than manually driven cars. Yet they cannot be 100% safe. Collisions are sometimes unavoidable. So self-driving cars need to be programmed for how they should respond to scenarios where collisions are highly likely or unavoidable. The accident-scenarios self-driving cars might face have recently been likened to the key examples and dilemmas associated with the trolley problem. In this article, we critically examine this tempting analogy. We identify three important ways in which the ethics of accident-algorithms for self-driving cars and the philosophy of the trolley problem differ from each other. These concern: the basic decision-making situation faced by those who decide how self-driving cars should be programmed to deal with accidents; moral and legal responsibility; and decision-making in the face of risks and uncertainty. In discussing these three areas of disanalogy, we isolate and identify a number of basic issues and complexities that arise within the ethics of the programming of self-driving cars.
In this paper, I argue that there is a kind of evil, namely, the unequal distribution of natural endowments, or natural inequality, which presents theists with a new evidential problem of evil. The problem of natural inequality is a new evidential problem of evil not only because, to the best of my knowledge, it has not yet been discussed in the literature, but also because available theodicies, such as the free will defense and the soul-making defense, are not adequate responses in the face of this particular evil, or so I argue.
The prospective introduction of autonomous cars into public traffic raises the question of how such systems should behave when an accident is inevitable. Due to concerns with self-interest and liberal legitimacy that have become paramount in the emerging debate, a contractarian framework seems to provide a particularly attractive means of approaching this problem. We examine one such attempt, which derives a harm minimisation rule from the assumptions of rational self-interest and ignorance of one’s position in a future accident. We contend, however, that both contractarian approaches and harm minimisation standards are flawed, due to a failure to account for the fundamental difference between those ‘involved’ and ‘uninvolved’ in an impending crash. Drawing from classical works on the trolley problem, we show how this notion can be substantiated by reference to either the distinction between negative and positive rights, or to differences in people’s claims. By supplementing harm minimisation with corresponding constraints, we can develop crash algorithms for autonomous cars which are both ethically adequate and promise to overcome certain significant practical barriers to implementation.
Philosophers and cognitive scientists have worried that research on animal mind-reading faces a ‘logical problem’: the difficulty of experimentally determining whether animals represent mental states (e.g. seeing) or merely the observable evidence (e.g. line-of-gaze) for those mental states. The most impressive attempt to confront this problem has been mounted recently by Robert Lurz. However, Lurz' approach faces its own logical problem, revealing this challenge to be a special case of the more general problem of distal content. Moreover, participants in this debate do not agree on criteria for representation. As such, future debate should either abandon the representational idiom or confront underlying semantic disagreements.
I present two Triviality results for Kratzer's standard “restrictor” analysis of indicative conditionals. I both refine and undermine the common claim that problems of Triviality do not arise for Kratzer conditionals since they are not strictly conditionals at all.
Moral non-cognitivists hope to explain the nature of moral agreement and disagreement as agreement and disagreement in non-cognitive attitudes. In doing so, they take on the task of identifying the relevant attitudes, distinguishing the non-cognitive attitudes corresponding to judgements of moral wrongness, for example, from attitudes involved in aesthetic disapproval or the sports fan’s disapproval of her team’s performance. We begin this paper by showing that there is a simple recipe for generating apparent counterexamples to any informative specification of the moral attitudes. This may appear to be a lethal objection to non-cognitivism, but a similar recipe challenges attempts by non-cognitivism’s competitors to specify the conditions underwriting the contrast between genuine and merely apparent moral disagreement. Because of its generality, this specification problem requires a systematic response, which, we argue, is most easily available for the non-cognitivist. Building on premisses congenial to the non-cognitivist tradition, we make the following claims: (1) In paradigmatic cases, wrongness-judgements constitute a certain complex but functionally unified state, and paradigmatic wrongness-judgements form a functional kind, preserved by homeostatic mechanisms. (2) Because of the practical function of such judgements, we should expect judges’ intuitive understanding of agreement and disagreement to be accommodating, treating states departing from the paradigm in various ways as wrongness-judgements. (3) This explains the intuitive judgements required by the counterexample-generating recipe, and more generally why various kinds of amoralists are seen as making genuine wrongness-judgements.
The reference class problem arises when we want to assign a probability to a proposition (or sentence, or event) X, which may be classified in various ways, yet its probability can change depending on how it is classified. The problem is usually regarded as one specifically for the frequentist interpretation of probability and is often considered fatal to it. I argue that versions of the classical, logical, propensity and subjectivist interpretations also fall prey to their own variants of the reference class problem. Other versions of these interpretations apparently evade the problem. But I contend that they are all “no-theory” theories of probability - accounts that leave quite obscure why probability should function as a guide to life, a suitable basis for rational inference and action. The reference class problem besets those theories that are genuinely informative and that plausibly constrain our inductive reasonings and decisions. I distinguish a “metaphysical” and an “epistemological” reference class problem. I submit that we can dissolve the former problem by recognizing that probability is fundamentally a two-place notion: conditional probability is the proper primitive of probability theory. However, I concede that the epistemological problem remains.
This is an opinionated overview of the Frege-Geach problem, in both its historical and contemporary guises. Covers Higher-order Attitude approaches, Tree-tying, Gibbard-style solutions, and Schroeder's recent A-type expressivist solution.
Computer-simulated scenarios have been part of psychological research on problem solving for more than 40 years. The shift in emphasis from simple toy problems to complex, more real-life oriented problems has been accompanied by discussions about the best ways to assess the process of solving complex problems. Psychometric issues such as reliable assessments and addressing correlations with other instruments have been in the foreground of these discussions and have left the content validity of complex problem solving in the background. In this paper, we return the focus to content issues and address the important features that define complex problems.
The new evil demon problem (NEDP) is often considered to be a serious obstacle for externalist theories of epistemic justification. In this paper, I aim to show that the new evil demon problem also afflicts the two most prominent forms of internalism: moderate internalism and historical internalism. Since virtually all internalists accept at least one of these two forms, it follows that virtually all internalists face the NEDP. My secondary thesis is that many epistemologists face a dilemma. The only form of internalism that is immune to the NEDP, strong internalism, is a very radical and revisionary view – a large number of epistemologists would have to significantly revise their views about justification in order to accept it. Hence, either epistemologists must accept a theory that is susceptible to the NEDP or accept a very radical and revisionary view.
In this paper, I argue that even if the Hard Problem of Content, as identified by Hutto and Myin, is important, it was already solved in naturalized semantics, and satisfactory solutions to the problem do not rely merely on the notion of information as covariance. I point out that Hutto and Myin have double standards for linguistic and mental representation, which leads to a peculiar inconsistency. Were they to apply the same standards to basic and linguistic minds, they would either have to embrace representationalism or turn to semantic nihilism, which is, as I argue, an unstable and unattractive position. Hence, I conclude, their book does not offer an alternative to representationalism. At the same time, it reminds us that representational talk in cognitive science cannot be taken for granted and that information is different from mental representation. Although this claim is not new, Hutto and Myin defend it forcefully and elegantly.
In the last 20 years, a stream of research has emerged under the label of “complex problem solving” (CPS). This research was intended to describe the way people deal with complex, dynamic, and intransparent situations. Complex computer-simulated scenarios were used as stimulus material in psychological experiments. This line of research led to subtle insights into the way people deal with complexity and uncertainty. Besides these knowledge-rich, realistic, intransparent, complex, dynamic scenarios with many variables, a second line of research used simpler, knowledge-lean scenarios with a low number of variables (“minimal complex systems”, MCS) that have been proposed recently in problem-solving research for the purpose of educational assessment. In both cases, the idea behind the use of microworlds is to increase the validity of problem-solving tasks by presenting interactive environments that can be explored and controlled by participants while pursuing certain action goals. The main argument presented here is: both types of systems - CPS and MCS - can only be dealt with successfully if causal dependencies between input and output variables are identified and used for system control. System knowledge is necessary for control and intervention. But CPS and MCS differ in how causal dependencies are identified and how the mental model is constructed; therefore, they cannot be compared directly to each other with respect to the cognitive processes that are necessary for solving the tasks. Knowledge-poor MCS tasks address only a small fraction of the cognitive processes and structures needed for knowledge-rich CPS situations.
Inquiry into the meaning of logical terms in natural language ('and', 'or', 'not', 'if') has generally proceeded along two dimensions. On the one hand, semantic theories aim to predict native speaker intuitions about the natural language sentences involving those logical terms. On the other hand, logical theories explore the formal properties of the translations of those terms into formal languages. Sometimes, these two lines of inquiry appear to be in tension: for instance, our best logical investigation into conditional connectives may show that there is no conditional operator that has all the properties native speaker intuitions suggest 'if' has. Indicative conditionals have famously been the source of one such tension, ever since the triviality proofs of both Lewis (1976) and Gibbard (1981) established conclusions which are in prima facie tension with ordinary judgments about natural language indicative conditionals. In a recent series of papers, Branden Fitelson has strengthened both triviality results (Fitelson 2013, 2015, 2016), revealing a common culprit: a logical schema known as IMPORT-EXPORT. Fitelson's results sharpen the tension between the logical results and ordinary judgments, since IMPORT-EXPORT seems to be supported by intuitions about natural language. In this paper, we argue that the intuitions which have been taken to support IMPORT-EXPORT are really evidence for a closely related, but subtly different, principle. We show that the two principles are independent by showing how, given a standard assumption about the conditional operator in the formal language in which IMPORT-EXPORT is stated, many existing theories of indicative conditionals validate one, but not the other. Moreover, we argue that once we clearly distinguish these principles, we can use propositional anaphora to show that IMPORT-EXPORT is in fact not valid for natural language indicative conditionals (given this assumption about the formal conditional operator).
This gives us a principled and independently motivated way of rejecting a crucial premise in many triviality results, while still making sense of the speaker intuitions which appeared to motivate that premise. We suggest that this strategy has broad application and an important lesson: in theorizing about the logic of natural language, we must pay careful attention to the translation between the formal languages in which logical results are typically proved, and natural languages which are the subject matter of semantic theory.
My primary aim is to defend a nonreductive solution to the problem of action. I argue that when you are performing an overt bodily action, you are playing an irreducible causal role in bringing about, sustaining, and controlling the movements of your body, a causal role best understood as an instance of agent causation. Thus, the solution that I defend employs a notion of agent causation, though emphatically not in defence of an account of free will, as most theories of agent causation are. Rather, I argue that the notion of agent causation introduced here best explains how it is that you are making your body move during an action, thereby providing a satisfactory solution to the problem of action.
According to Intellectualism knowing how to V is a matter of knowing a suitable proposition about a way of V-ing. In this paper, I consider the question of which ways of acting might figure in the propositions which Intellectualists claim constitute the object of knowledge-how. I argue that Intellectualists face a version of the Generality Problem – familiar from discussions of Reliabilism – since not all ways of V-ing are such that knowledge about them suffices for knowledge-how. I consider various responses to this problem, and argue that none are satisfactory.
We can classify theories of consciousness along two dimensions. First, a theory might be physicalist or dualist. Second, a theory might endorse any of these three views regarding causal relations between phenomenal properties (properties that characterize states of our consciousness) and physical properties: nomism (the two kinds of property interact through deterministic laws), acausalism (they do not causally interact), and anomalism (they interact but not through deterministic laws). In this paper, I explore anomalous dualism, a combination of views that has not previously been explored (as far as I know). I suggest that a kind of anomalous dualism, nonreductive anomalous panpsychism, promises to offer the best overall answer to two pressing issues for dualist views, the problem of mental causation and the mapping problem (the problem of predicting mind-body associations).
Panpsychism, the view that microphysical entities have phenomenal experiences that constitute the phenomenal experiences of macrophysical entities, seems to be committed to various sorts of mental combination: it seems that experiences, subjects, and phenomenal characters would have to mentally combine in order to yield experiences such as our own. The combination problem for panpsychism is that of explaining precisely how the required forms of mental combination occur. This paper argues that, given a few plausible assumptions, the panpsychist's combination problems are not different in kind from other combination problems that are problems for everyone: the problem of phenomenal unity, the problem of mental structure, and the problem of explaining how we can have experiences in new quality spaces. Understanding mental combination poses a significant challenge to understanding the mind, and it is a problem for everyone.
Barnett and Block (J Bus Ethics 18(2):179–194, 2011) argue that one cannot distinguish between deposits and loans due to the continuum problem of maturities and because future goods do not exist, both essential characteristics that distinguish deposit from loan contracts. In a similar way, but leading to opposite conclusions, Cachanosky (forthcoming) maintains that both maturity mismatching and fractional reserve banking are ethically justified as these contracts are equivalent. We argue herein that the economic and legal differences between genuine deposit and loan contracts are clear. This implies different legal obligations for these contracts, a necessary step in assessing the ethics of both fractional reserve banking and maturity mismatching. While the former is economically, legally, and perhaps most importantly ethically problematic, there are no such troubles with the latter.
Many of us agree that we ought not to wrong future people, but there remains disagreement about which of our actions can wrong them. Can we wrong individuals whose lives are worth living by taking actions that result in their very existence? The problem of justifying an answer to this question has come to be known as the non-identity problem. While the literature contains an array of strategies for solving the problem, in this paper I will take what I call the harm-based approach, and I will defend an account of harming, which I call the existence account of harming, that can vindicate this approach.

Roughly put, the harm-based approach holds that, by acting in ways that result in the existence of individuals whose lives are worth living, we can harm and thereby wrong those individuals. An initially plausible way to try to justify this approach is to endorse the non-comparative account of harming, which holds that an event harms an individual just in case it causes her to be in a bad state, such that the state's badness does not derive from a comparison between that state and some alternative state that the individual would or could have been in. However, many philosophers argue that the non-comparative account of harming is inadequate, and one might be tempted to infer from this that any harm-based approach to the non-identity problem will fail. My proposal, which I call the existence account of harming, will show that this inference is faulty: we can vindicate the harm-based approach without relying on the non-comparative account of harming.
Explaining the mind by building machines with minds runs into the other-minds problem: How can we tell whether any body other than our own has a mind when the only way to know is by being the other body? In practice we all use some form of Turing Test: If it can do everything a body with a mind can do such that we can't tell them apart, we have no basis for doubting it has a mind. But what is "everything" a body with a mind can do? Turing's original "pen-pal" version (the TT) only tested linguistic capacity, but Searle has shown that a mindless symbol-manipulator could pass the TT undetected. The Total Turing Test (TTT) calls for all of our linguistic and robotic capacities; immune to Searle's argument, it suggests how to ground a symbol manipulating system in the capacity to pick out the objects its symbols refer to. No Turing Test, however, can guarantee that a body has a mind. Worse, nothing in the explanation of its successful performance requires a model to have a mind at all. Minds are hence very different from the unobservables of physics (e.g., superstrings); and Turing Testing, though essential for machine-modeling the mind, can really only yield an explanation of the body.
Expressivists, such as Blackburn, analyse sentences such as 'S thinks that it ought to be the case that p' as 'S hoorays that p'. A problem is that the former sentence can be negated in three different ways, but the latter in only two. The distinction between refusing to accept a moral judgement and accepting its negation therefore cannot be accounted for. This is shown to undermine Blackburn's solution to the Frege-Geach problem.
A philosophical standard in the debates concerning material constitution is the case of a statue and a lump of clay, Goliath and Lumpl, respectively. According to the story, Lumpl and Goliath are coincident throughout their respective careers. Monists hold that they are identical; pluralists that they are distinct. This paper is concerned with a particular objection to pluralism, the Grounding Problem. The objection is roughly that the pluralist faces a legitimate explanatory demand to explain various differences she alleges between Lumpl and Goliath, but that the pluralist's theory lacks the resources to give any such explanation. In this paper, I explore the question of whether there really is any problem of this sort. I argue (i) that explanatory demands that are clearly legitimate are easy for the pluralist to meet; (ii) that even in cases of explanatory demands whose legitimacy is questionable the pluralist has some overlooked resources; and (iii) that there is some reason for optimism about the pluralist's prospects for meeting every legitimate explanatory demand. In short, no clearly adequate statement of a Grounding Problem is extant, and there is some reason to believe that the pluralist can overcome any Grounding Problem that we haven't thought of yet.
This is a reply to Chris Tweed's recent attempt to solve the problem of "nearly convergent knowledge" and thus defend a binary account of knowledge against a contrastivist alternative. Ingenious as his proposal is, it still does not solve the problem.