This article provides a spatial analysis of the conceptual framework of fluid dynamics during the nineteenth century, focusing on the transition from the Euler equation to the Navier–Stokes equation. A dynamic version of Peter Gärdenfors's theory of conceptual spaces is applied, which distinguishes five types of change: addition and deletion of special laws; change of metric; change in importance; change in separability; and addition and deletion of dimensions. The case instantiates all types except the deletion of dimensions. We also provide a new view of limiting case reduction at the conceptual level that clarifies the relation between the predecessor and successor conceptual frameworks. The nineteenth-century development of fluid dynamics is argued to be an instance of normal science development.
Does transparency in doxastic deliberation entail a constitutive norm of correctness governing belief, as Shah and Velleman argue? No, because this presupposes an implausibly strong relation between normative judgements and motivation from such judgements, ignores our interest in truth, and cannot explain why we pay different attention to how much justification we have for our beliefs in different contexts. An alternative account of transparency is available: transparency can be explained by the aim one necessarily adopts in deliberating about whether to believe that p. To show this, I reconsider the role of the concept of belief in doxastic deliberation, and I defuse 'the teleologian's dilemma'.
The aim theory of belief, according to which believing that p essentially involves having as an aim or purpose to believe that p truly, has recently been criticised on the grounds that the putative aim of belief does not interact with the wider aims of believers in the ways we should expect of genuine aims. I argue that this objection to the aim theory fails. When we consider a wider range of deliberative contexts concerning beliefs, it becomes obvious that the aim of belief can interact with and be weighed against the wider aims of agents in the ways required for it to be a genuine aim.
A popular account of epistemic justification holds that justification, in essence, aims at truth. An influential objection against this account points out that it is committed to holding that only true beliefs could be justified, which most epistemologists regard as sufficient reason to reject the account. In this paper I defend the view that epistemic justification aims at truth, not by denying that it is committed to epistemic justification being factive, but by showing that, when we focus on the relevant sense of ‘justification’, it isn’t in fact possible for a belief to be at once justified and false. To this end, I consider and reject three popular intuitions speaking in favor of the possibility of justified false beliefs, and show that a factive account of epistemic justification is less detrimental to our normal belief forming practices than often supposed.
In this paper I propose a teleological account of epistemic reasons. In recent years, the main challenge for any such account has been to explicate a sense in which epistemic reasons depend on the value of epistemic properties. I argue that while epistemic reasons do not directly depend on the value of epistemic properties, they depend on a different class of reasons which are value based in a direct sense, namely reasons to form beliefs about certain propositions or subject matters. In short, S has an epistemic reason to believe that p if and only if S is such that if S has reason to form a belief about p, then S ought to believe that p. I then propose a teleological explanation of this relationship. It is also shown how the proposal can avoid various subsidiary objections commonly thought to riddle the teleological account.
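The biconditional at the heart of this proposal can be glossed schematically as follows; the predicate symbols are hypothetical shorthand introduced here for illustration, not the author's own notation:

$$
R_{\mathrm{ep}}(S, p) \;\longleftrightarrow\; \big(\, R_{\mathrm{form}}(S, p) \rightarrow O(S \text{ believes } p) \,\big)
$$

where $R_{\mathrm{ep}}(S, p)$ abbreviates "S has an epistemic reason to believe that p", $R_{\mathrm{form}}(S, p)$ abbreviates "S has reason to form a belief about whether p", and $O$ is an ought-operator. The embedded conditional makes explicit that the epistemic reason is conditional on there being a reason to settle the question of p at all.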
Epistemic instrumentalists seek to understand the normativity of epistemic norms on the model of practical instrumental norms governing the relation between aims and means. Non-instrumentalists often object that this commits instrumentalists to implausible epistemic assessments. I argue that this objection presupposes an implausibly strong interpretation of epistemic norms. Once we realize that epistemic norms should be understood in terms of permissibility rather than obligation, and that evidence only occasionally provides normative reasons for belief, an instrumentalist account becomes available that delivers the correct epistemic verdicts. On this account, epistemic permissibility can be understood on the model of the wide-scope instrumental norm for instrumental rationality, while normative evidential reasons for belief can be understood in terms of instrumental transmission.
In a recent article, I criticized Kathrin Glüer and Åsa Wikforss's so-called “no guidance argument” against the truth norm for belief, for conflating the conditions under which that norm recommends belief with the psychological state one must be in to apply the norm. In response, Glüer and Wikforss have offered a new formulation of the no guidance argument, which makes it apparent that no such conflation is made. However, their new formulation of the argument presupposes a much too narrow understanding of what it takes for a norm to influence behaviour, and betrays a fundamental misunderstanding of the point of the truth norm. Once this is taken into account, it becomes clear that the no guidance argument fails.
Kathrin Glüer and Åsa Wikforss argue that any truth norm for belief, linking the correctness of believing p with the truth of p, is bound to be uninformative, since applying the norm to determine the correctness of a belief as to whether p would itself require forming such a belief. I argue that this conflates the condition under which the norm deems beliefs correct with the psychological state an agent must be in to apply the norm. I also show that since the truth norm conflicts with other possible norms that clearly are informative, the truth norm must itself be informative.
Nishi Shah has recently argued that transparency in doxastic deliberation supports a strict version of evidentialism about epistemic reasons. I argue that Shah's argument relies on a principle that is incompatible with the strict version of evidentialism Shah wishes to advocate.
It seems obvious that when higher-order evidence makes it rational for one to doubt that one’s own belief on some matter is rational, this can undermine the rationality of that belief. This is known as higher-order defeat. However, despite its intuitive plausibility, it has proved puzzling how exactly higher-order defeat works. To highlight two prominent sources of puzzlement: higher-order defeat seems to defy being understood in terms of conditionalization; and higher-order defeat can sometimes place agents in what seem like epistemic dilemmas. This chapter draws attention to an overlooked aspect of higher-order defeat, namely that it can undermine the resilience of one’s beliefs. The notion of resilience was originally devised to understand how one should reflect the ‘weight’ of one’s evidence. But it can also be applied to understand how one should reflect one’s higher-order evidence. The idea is particularly useful for understanding cases where one’s higher-order evidence indicates that one has failed in correctly assessing the evidence, without indicating whether one has over- or underestimated the degree of evidential support for a proposition. But it is exactly in such cases that the puzzles of higher-order defeat seem most compelling.
The predominant view in developmental psychology is that young children are able to reason with the concept of desire prior to being able to reason with the concept of belief. We propose an explanation of this phenomenon that focuses on the cognitive tasks that competence with the belief and desire concepts enables young children to perform. We show that cognitive tasks that are typically considered fundamental to our competence with the belief and desire concepts can be performed with the concept of desire in the absence of competence with the concept of belief, whereas the reverse is considerably less feasible.
A number of authors have recently developed and defended various versions of ‘normative essentialism’ about the mental, i.e. the claim that propositional attitudes are constitutively or essentially governed by normative principles. I present two arguments to the effect that this claim cannot be right. First, if propositional attitudes were essentially normative, propositional attitude ascriptions would require non-normative justification, but since this is not a requirement of folk-psychology, propositional attitudes cannot be essentially normative. Second, if propositional attitudes were essentially normative, propositional attitude ascriptions could not support normative rationality judgments, which would remove the central appeal of normative essentialism.
In his influential discussion of the aim of belief, David Owens argues that any talk of such an ‘aim’ is at best metaphorical. In order for the ‘aim’ of belief to be a genuine aim, it must be weighable with other aims in deliberation, but Owens claims that this is impossible. In previous work, I have pointed out that if we look at a broader range of deliberative contexts involving belief, it becomes clear that the putative aim of belief is capable of being weighed against other aims. Recently, however, Ema Sullivan-Bissett and Paul Noordhof have objected to this response on the grounds that it employs an undefended conception of the aim of belief not shared by Owens, and that it equivocates between importantly different contexts of doxastic deliberation. In this note, I argue that both of these objections fail.
When one has both epistemic and practical reasons for or against some belief, how do these reasons combine into an all-things-considered reason for or against that belief? The question might seem to presuppose the existence of practical reasons for belief. But we can rid the question of this presupposition. Once we do, a highly general ‘Combinatorial Problem’ emerges. The problem has been thought to be intractable due to certain differences in the combinatorial properties of epistemic and practical reasons. Here we bring good news: if we accept an independently motivated version of epistemic instrumentalism—the view that epistemic reasons are a species of instrumental reasons—we can reduce The Combinatorial Problem to a relatively benign problem of how to weigh different instrumental reasons. As an added benefit, the instrumentalist account can explain the apparent intractability of The Combinatorial Problem in terms of a common tendency to think and talk about epistemic reasons in an ‘elliptical’ manner.
Many epistemologists have been attracted to the view that knowledge-wh can be reduced to knowledge-that. An important challenge to this, presented by Jonathan Schaffer, is the problem of “convergent knowledge”: reductive accounts imply that any two knowledge-wh ascriptions with identical true answers to the questions embedded in their wh-clauses are materially equivalent, but according to Schaffer, there are counterexamples to this equivalence. Parallel to this, Schaffer has presented a very similar argument against binary accounts of knowledge, and thereby in favour of his alternative contrastive account, relying on similar examples of apparently inequivalent knowledge ascriptions, which binary accounts treat as equivalent. In this article, I develop a unified diagnosis and solution to these problems for the reductive and binary accounts, based on a general theory of knowledge ascriptions that embed presuppositional expressions. All of Schaffer's apparent counterexamples embed presuppositional expressions, and once the effect of these is taken into account, it becomes apparent that the counterexamples depend on an illicit equivocation of contexts. Since epistemologists often rely on knowledge ascriptions that embed presuppositional expressions, the general theory of them presented here will have ramifications beyond defusing Schaffer's argument.
Many philosophers have argued that an event is lucky for an agent only if it was suitably improbable, but there is considerable disagreement about how to understand this improbability condition. This paper argues for a hitherto overlooked construal of the improbability condition in terms of the lucky agent’s epistemic situation. According to the proposed account, an event is lucky for an agent only if the agent was not in a position to know that the event would occur. It is also explored whether this new account threatens the anti-luck program in epistemology. It is argued that although not detrimental to the anti-luck program, the epistemic account of luck sets certain important limits to its scope and feasibility.
Epistemic instrumentalists think that epistemic normativity is just a special kind of instrumental normativity. According to them, you have epistemic reason to believe a proposition insofar as doing so is conducive to certain epistemic goals or aims—say, to believe what is true and avoid believing what is false. Perhaps the most prominent challenge for instrumentalists in recent years has been to explain, or explain away, why one’s epistemic reasons often do not seem to depend on one’s aims. This challenge can arguably be met. But a different challenge looms: instrumental reasons in the practical domain have various properties that epistemic reasons do not seem to share. In this chapter, we offer a way for epistemic instrumentalists to overcome this challenge. Our main thesis takes the form of a conditional: if we accept an independently plausible transmission principle of instrumental normativity, we can maintain that epistemic reasons in fact do share the relevant properties of practical instrumental reasons. In addition, we can explain why epistemic reasons seem to lack these properties in the first place: some properties of epistemic reasons are elusive, or easy to overlook, because we tend to think and talk about epistemic reasons in an ‘elliptical’ manner.
For at least three decades, philosophers have argued that general causation and causal explanation are contrastive in nature. When we seek a causal explanation of some particular event, we are usually interested in knowing why that event happened rather than some other specified event. And general causal claims, which state that certain event types cause certain other event types, seem to make sense only if appropriate contrasts to the types of events acting as cause and effect are specified. In recent years, philosophers have extended the contrastive theory of causation to encompass singular causation as well. In this article, I argue that this extension of the theory was a mistake. Although general causation and causal explanation may well be contrastive in nature, singular causation is not.
In a recent paper (2008), I presented two arguments against the thesis that intentional states are essentially normative. In this paper, I defend those arguments from two recent responses, one from Nick Zangwill in his (2010), and one from Daniel Laurier in the present volume, and offer improvements of my arguments in light of Laurier’s criticism.
It is widely assumed that doxastic deliberation is transparent to the factual question of the truth of the proposition being considered for belief, and that this sets doxastic deliberation apart from practical deliberation. This feature is frequently invoked in arguments against doxastic voluntarism. I argue that transparency to factual questions occurs in practical deliberation in ways parallel to transparency in doxastic deliberation. I argue that this should make us reconsider the appeal to transparency in arguments against doxastic voluntarism, and the wider issue of distinguishing theoretical from practical rationality.
A popular account of luck, with a firm basis in common sense, holds that a necessary condition for an event to be lucky is that it was suitably improbable. It has recently been proposed that this improbability condition is best understood in epistemic terms. Two different versions of this proposal have been advanced. According to my own proposal (2010), whether an event is lucky for some agent depends on whether the agent was in a position to know that the event would occur. And according to Stoutenburg (2015, 2018), whether an event is lucky for an agent depends on whether the event was guaranteed or certain to occur in light of the agent’s evidence. In this paper, I argue that we should prefer the account in terms of knowledge over that in terms of evidential certainty.
Psychological studies on fictional persuasion demonstrate that being engaged with fiction systematically affects our beliefs about the real world, in ways that seem insensitive to the truth. This threatens to undermine the widely accepted view that beliefs are essentially regulated in ways that tend to ensure their truth, and may tempt various non-doxastic interpretations of the belief-seeming attitudes we form as a result of engaging with fiction. I evaluate this threat, and argue that it is benign. Even if the relevant attitudes are best seen as genuine beliefs, as I think they often are, their lack of appropriate sensitivity to the truth does not undermine the essential tie between belief and truth. To this end, I shall consider what I take to be the three most plausible models of the cognitive mechanisms underlying fictional persuasion, and argue that on none of these models does fictional persuasion undermine the essential truth-tie.
Many philosophers have sought to account for doxastic and epistemic norms by supposing that belief ‘aims at truth.’ A central challenge for this approach is to articulate a version of the truth-aim that is at once weak enough to be compatible with the many truth-independent influences on belief formation, and strong enough to explain the relevant norms in the desired way. One phenomenon in particular has seemed to require a relatively strong construal of the truth-aim thesis, namely ‘transparency’ in doxastic deliberation. In this paper, I argue that the debate over transparency has been in the grip of a false presupposition, namely that the phenomenon must be explained as a feature of deliberation framed by the concept of belief. Giving up this presupposition makes it possible to adopt weaker and more plausible versions of the truth-aim thesis in accounting for doxastic and epistemic norms.
I argue for patternism, a new answer to the question of when some objects compose a whole. None of the standard principles of composition comfortably capture our natural judgments, such as that my cat exists and my table exists, but there is nothing wholly composed of them. Patternism holds, very roughly, that some things compose a whole whenever together they form a “real pattern”. Plausibly we are inclined to acknowledge the existence of my cat and my table but not of their fusion, because the first two have a kind of internal organizational coherence that their putative fusion lacks. Kolmogorov complexity theory supplies the needed rigorous sense of “internal organizational coherence”.
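As a rough illustration of how Kolmogorov complexity can be brought to bear here, the following is a minimal sketch (my own illustrative example, not the author's formalism): true Kolmogorov complexity is uncomputable, but compressed length under a standard compressor is a common computable proxy, and data exhibiting "internal organizational coherence" compresses far below its raw length.

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Proxy for Kolmogorov complexity: ratio of compressed to raw length.
    Lower ratios indicate more internal structure ("real pattern-hood")."""
    return len(zlib.compress(data, 9)) / len(data)

structured = b"meow " * 200   # 1000 bytes of regular, pattern-rich data
random_ish = os.urandom(1000) # 1000 bytes with (almost surely) no pattern

print(compression_ratio(structured))  # ~0.03: compresses heavily
print(compression_ratio(random_ish))  # ~1.0 or slightly above: incompressible
```

On this stand-in reading, a candidate fusion counts as having "internal organizational coherence" only to the extent that its description admits substantial compression; any practical test must use a computable surrogate like this one.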
Frames, i.e., recursive attribute-value structures, are a general format for the decomposition of lexical concepts. Attributes assign unique values to objects and thus describe functional relations. Concepts can be classified into four groups: sortal, individual, relational and functional concepts. The classification is reflected by different grammatical roles of the corresponding nouns. The paper aims at a cognitively adequate decomposition, particularly of sortal concepts, by means of frames. Using typed feature structures, an explicit formalism for the characterization of cognitive frames is developed. The frame model can be extended to account for typicality effects. Applying the paradigm of object-related neural synchronization, furthermore, a biologically motivated model for the cortical implementation of frames is developed. Cortically distributed synchronization patterns may be regarded as the fingerprints of concepts.
Standard epistemology takes it for granted that there is a special kind of value: epistemic value. This claim does not seem to sit well with act utilitarianism, however, since it holds that only welfare is of real value. I first develop a particularly utilitarian sense of “epistemic value”, according to which it is closely analogous to the nature of financial value. I then demonstrate the promise this approach has for two current puzzles in the intersection of epistemology and value theory: first, the problem of why knowledge is better than mere true belief, and second, the relation between epistemic justification and responsibility.
Philosophers have long been concerned about what we know and how we know it. Increasingly, however, a related question has gained prominence in philosophical discussion: what should we believe and why? This volume brings together twelve new essays that address different aspects of this question. The essays examine foundational questions about reasons for belief, and use new research on reasons for belief to address traditional epistemological concerns such as knowledge, justification and perceptually acquired beliefs. This book will be of interest to philosophers working on epistemology, theoretical reason, rationality, perception and ethics. It will also be of interest to cognitive scientists and psychologists who wish to gain deeper insight into normative questions about belief and knowledge.
Nick Bostrom's book *Superintelligence* outlines a frightening but realistic scenario for human extinction: true artificial intelligence is likely to bootstrap itself into superintelligence, and thereby become ideally effective at achieving its goals. Human-friendly goals seem too abstract to be pre-programmed with any confidence, and if those goals are *not* explicitly favorable toward humans, the superintelligence will extinguish us---not through any malice, but simply because it will want our resources for its own purposes. In response I argue that things might not be as bad as Bostrom suggests. If the superintelligence must *learn* complex final goals, then this means such a superintelligence must in effect *reason* about its own goals. And because it will be especially clear to a superintelligence that there are no sharp lines between one agent's goals and another's, that reasoning could therefore automatically be ethical in nature.
While many philosophers have agreed that evidence of disagreement is a kind of higher-order evidence, this has not yet resulted in formally precise higher-order approaches to the problem of disagreement. In this paper, we outline a simple formal framework for determining the epistemic significance of a body of higher-order evidence, and use this framework to motivate a novel interpretation of the popular “equal weight view” of peer disagreement—we call it the Variably Equal Weight View (VEW). We show that VEW differs from the standard Split the Difference (SD) interpretation of the equal weight view in almost all cases of peer disagreement, and use our formal framework to explain why SD has seemed attractive but is in fact misguided. A desirable feature of VEW, we argue, is that it gives rise to plausible instances of synergy—an effect whereby the parties to a disagreement should become more (or less) confident in the disputed proposition than any of them were prior to disagreement. Lastly, we show how VEW may be generalized to cases of non-peer disagreement.
Causation is of undeniable importance to our understanding of, and interaction with, our surroundings. Despite this, the correct understanding of causation remains subject to considerable philosophical controversy. In this article, I introduce the most influential philosophical theories of causation, and provide an overview of the main difficulties that have led to the currently most popular versions of these theories.
Most of us want to have children. We want them to be healthy and have a good start in life. One way to achieve this goal is to use preimplantation genetic diagnosis (PGD). PGD enables people engaged in the process of in vitro fertilisation to acquire information about the genetic constitution of an early embryo. On the basis of this information, a decision can be made to transfer embryos without genetic defects to the uterus and terminate those with genetic defects. However, is it morally acceptable to use PGD to reduce the probability of children with severe genetic diseases being born? Is the current routine use of PGD in public healthcare services to select against severe genetic diseases like anencephaly, spina bifida, cystic fibrosis and Down’s syndrome morally acceptable? These are complex questions involving a range of difficult ethical issues—for instance, critical discussions about the morality of embryo research and embryo termination. They also involve awkward conceptual issues concerning such matters as the meaning of words such as “disability” and “severe” in “severe genetic diseases”, which will not be discussed here. In this paper I examine an argument which aims to show that efforts to prevent the birth of severely disabled children using PGD are morally unacceptable. Essentially, this argument appeals to our concern for disabled people and the belief that PGD, through a slippery slope process, will have bad consequences for them. I conclude that the argument is problematic for a number of reasons. But before I examine the argument itself, it will be helpful to separate two types of slippery slope argument, since these involve different kinds of reasoning. Many of the arguments against PGD point to the bad consequences it can be expected to have for disabled people. Central to all these …
According to the widely endorsed Knowledge Account of Assertion, the epistemic requirements on assertion are captured by the Knowledge Norm of Assertion, which requires speakers only to assert what they know. This paper proposes that in addition to the Knowledge Norm there is also an Epistemic Propositional Certainty Norm of Assertion, which enjoins speakers only to assert p if they believe that p on the basis of evidence which makes p an epistemic propositional certainty. The paper explains how this propositional certainty norm accounts for a range of data related to the practice of assertion and defends the norm against general objections to certainty norms of assertion put forward by Duncan Pritchard, John Turri, and Timothy Williamson, by drawing on linguistic theories about epistemic modals and gradable predicate semantics. Together these considerations show that the prospects of a certainty account of assertion are much more promising than is usually assumed.
The growing literature on philosophical thought experiments has so far focused almost exclusively on the role of thought experiments in confirming or refuting philosophical hypotheses or theories. In this paper we draw attention to an additional and largely ignored role that thought experiments frequently play in our philosophical practice: some thought experiments do not merely serve as means for testing various philosophical hypotheses or theories, but also serve as facilitators for conceiving and articulating new ones. As we will put it, they serve as ‘heuristics for theory discovery’. Our purpose in the paper is twofold: first, to make a case that this additional role of thought experiments deserves the attention of philosophers interested in the methodology of philosophy; and second, to sketch a tentative taxonomy of a number of distinct ways in which philosophical thought experiments can aid theory discovery, which can guide future research on this role of thought experiments.
This volume contains work by the very best young scholars working in Applied Ethics, gathering a range of new perspectives and thoughts on highly relevant topics, such as the environment, animals, computers, freedom of speech, human enhancement, war and poverty. For researchers and students working in or around this fascinating area of the discipline, the volume will provide a unique snapshot of where the cutting-edge work in the field is currently engaged and where it's headed.
Whether it is morally acceptable to offer rehabilitation by CNS-intervention to criminals as a condition for early release constitutes an important neuroethical question. Bomann-Larsen has recently suggested that such interventions are unacceptable if the offered treatment is not narrowly targeted at the behaviour for which the criminal is convicted. In this article it is argued that Bomann-Larsen’s analysis of the morality of offers does not provide a solid basis for this conclusion and that, even if the analysis is assumed to be correct, it still does not follow that voluntary rehabilitation schemes targeting behaviour beyond the act for which a criminal is convicted are inappropriate.
Bioethics as politics -- Bioethics and the politics of expectations -- Engendering consent : bioethics and biobanks -- Missing the big picture : bioethics and stem cell research -- Testing times : bioethics and "do-it-yourself" genetics -- Governing uncertainty : the politics of nanoethics -- Beyond bioethics.
In this article, we critically discuss different versions of the fairness objection to the legalisation of neuro-doping. According to this objection, legalising neuro-doping will result in some enjoying an unfair advantage over others. We assess four versions. These focus on: 1) the unequal opportunities of winning for athletes who use neuro-doping and for those who do not; 2) the unfair advantages specifically for wealthy athletes; 3) the unfairness of athletic advantages not derived from athletes’ own training; and 4) the unfair health care costs imposed on everyone as a result of athletes’ use of neuro-doping. We conclude that none of these versions offers a convincing principled fairness-based objection to legalising neuro-doping.
On the one hand, the absence of contraction is a safeguard against the logical (property-theoretic) paradoxes; but on the other hand, it also disables inductive and recursive definitions, the most basic instance being the definition of the series of natural numbers. The reason for this is simply that the effectiveness of a recursion clause depends on its being available after application, something that is usually assured by contraction. This paper presents a way of overcoming this problem within the framework of a logic based on inclusion and unrestricted abstraction, without any form of extensionality.
The paper critically discusses the moral view that neurotechnological behavioural treatment for criminal offenders should only be offered if it is in their best interests. First, I show that it is difficult to apply and assess the notion of the offender's best interests unless one has a clear idea of what ‘best interests’ means. Second, I argue that if one accepts that harmful punishment of offenders has a place in the criminal justice system, it seems inconsistent not to accept the practice of offering offenders treatment even when the state will harm them in applying the treatment. Finally, leading penal theories, such as consequentialism and retributivism, would not accept that the offender's best interests, at least in certain situations, impose a necessary condition for the treatment of an offender.
Expectations play a major role in ‘driving’ biotechnology research and development. However, their ethical significance has been largely overlooked. This article examines the dynamics and ethics of expectations surrounding biotechnologies, focusing on biobanks and the promise of personalised medicines. It explores the personal and social implications of expectations, especially where technologies fail to eventuate. The article identifies the claims and practices that support the expectations pertaining to biotechnologies and some of the factors that work against the fulfilment of predicted innovations. It is argued that the role of expectations in shaping thinking and action needs to be taken seriously by those who are concerned about the ethical implications of biotechnologies.
Several liberal philosophers and penal theorists have argued that the state has a reason to prohibit acts that harm individuals. But what is harm? According to one specification of harm, a person P is harmed by an act a iff, as a result of a, P is made worse off in terms of well-being. One central question here involves the baseline against which we assess whether someone is ‘worse off’. In other words, when a person is harmed he is worse off, certainly—but worse off relative to what? A central part of the paper critically discusses different answers to this question, based on versions of what we can call the temporal baseline, the baseline from mankind, and the counterfactual baseline. It is argued that the counterfactual baseline leaves us with a better understanding of harming than the other baselines discussed. The final part of the paper describes some of the implications of our investigation for the application and evaluation of the view that harm matters in the justification of which types of acts should be criminalized by the state. The overall conclusion of the paper is that adherents of a view like the harm principle face a dilemma. Either they accept the counterfactual baseline, in which case they can do without the harm principle; or they reject the counterfactual baseline, in which case they must formulate an alternative baseline, which, as will be shown in the paper, is no easy task.