The paper focuses on the problem of anthropodicy in the philosophical system of Hermann Cohen and its interpretation by Jacob Gordin (1896—1947). Gordin was one of the last followers of Cohen in Russia. He develops his interpretation in the lecture “Anthropodicy”, which was given in the Philosophical Circle at Petrograd University in December 1921. For the study of the problem of anthropodicy he was apparently inspired by the discussions at the Free Philosophical Association in 1919—1921. Gordin places Cohen’s concept of man in the wide intellectual context given by the ideas of Russian religious philosophy, German classical philosophy, Neo-Kantianism, and West European and Jewish mysticism (cabbala). Gordin compares Cohen’s anthropodicy with Vladimir Soloviev’s and shows that there is a similarity in their approaches. Both philosophers point out that the justification of man is possible only in the form of the justification of humanity, and not, as for Berdyaev, in the form of the justification of personality. But Gordin uses Soloviev’s concept of all-unity and Berdyaev’s concept of creativity in order to “improve” Cohen’s conception and to reveal the contribution of a person to the justification of humanity. More strongly than Cohen, Gordin connects the program of anthropodicy with individuality and underlines the participation of the individual in creating culture.
In 2004 a survey was conducted in the member states of the European Union designed to gain greater insight into the views on control strategies for foot and mouth disease, classical swine fever, and avian influenza with respect to the epidemiological, economic and social-ethical consequences of each of these animal diseases. This article presents the results of the social-ethical survey. A selection of stakeholders from each member state was asked to prioritize issues for the prevention and control of these diseases. A majority of stakeholders chose preventive measures as the preferred issue. An analysis was done to determine whether there were differences in views expressed by stakeholders from member states with a history of recent epidemics and ones without such a history, and whether there were regional differences. There were no differences between member states with or without a history of recent epidemics. There were indeed regional differences between the priority orders from Northern and Southern Europe on the one hand, and from Eastern Europe on the other.
In this paper, we present and defend the theoretical framework of an empirical model to describe people’s fundamental moral attitudes (FMAs) to animals, the stratification of FMAs in society, and the role of FMAs in judgment on the culling of healthy animals in an animal disease epidemic. We used philosophical animal ethics theories to understand the moral basis of FMA convictions. Moreover, these theories provide us with a moral language for communication between animal ethics, FMAs, and public debates. We defend the claim that FMA is a two-layered concept. The first layer consists of deeply felt convictions about animals. The second layer consists of convictions derived from the first layer to serve as arguments in a debate on animal issues. In a debate, the latter convictions are variable, depending on the animal issue in a specific context, time, and place. This variability facilitates finding common ground in an animal issue between actors with opposing convictions.
This article seeks to shed light on civil commitment in the context of the opioid crisis, to sketch the existing legal landscape surrounding civil commitment, and to illustrate the relevant medical, ethical, and legal concerns that policymakers must take into account as they struggle to find appropriate responses to the crisis.
European animal disease policy seems to find its justification in a “harm to others” principle. Limiting the freedom of animal keepers—e.g., by culling their animals—is justified by the aim to prevent harm, i.e., the spreading of the disease. The picture, however, is more complicated. Both during the control of outbreaks and in the prevention of notifiable animal diseases the government is confronted with conflicting claims of stakeholders who anticipate running a risk of being harmed by each other, and who ask for government intervention. In this paper, we first argue that in a policy that aims to prevent animal diseases, the focus shifts from limiting “harm” to weighing conflicting claims with respect to “risks of harm.” Therefore, we claim that the harm principle is no longer a sufficient justification for governmental intervention in animal disease prevention. A policy that has to deal with and distribute conflicting risks of harm needs additional value assumptions that guide this process of assessment and distribution. We show that current policies are based on assumptions that are mainly economic considerations. In order to show the limitations of these considerations, we use the interests and position of keepers of backyard animals as an example. Based on the problems they faced during and after the recent outbreaks, we defend the thesis that, in order to develop a sustainable animal disease policy, assumptions other than economic ones need to be taken into account.
A long tradition of psychological research has explored the distinction between characteristics that are part of the self and those that lie outside of it. Recently, a surge of research has begun examining a further distinction. Even among characteristics that are internal to the self, people pick out a subset as belonging to the true self. These factors are judged as making people who they really are, deep down. In this paper, we introduce the concept of the true self and identify features that distinguish people’s understanding of the true self from their understanding of the self more generally. In particular, we consider recent findings that the true self is perceived as positive and moral, and that this tendency is actor-observer invariant and cross-culturally stable. We then explore possible explanations for these findings and discuss their implications for a variety of issues in psychology.
Some ways of defending inequality against the charge that it is unjust require premises that egalitarians find easy to dismiss—statements, for example, about the contrasting deserts and/or entitlements of unequally placed people. But a defense of inequality suggested by John Rawls and elaborated by Brian Barry has often proved irresistible even to people of egalitarian outlook. The persuasive power of this defense of inequality has helped to drive authentic egalitarianism, of an old-fashioned, uncompromising kind, out of contemporary political philosophy. The present essay is part of an attempt to bring it back in.
1. The present paper is a continuation of my “Self-Ownership, World Ownership, and Equality,” which began with a description of the political philosophy of Robert Nozick. I contended in that essay that the foundational claim of Nozick's philosophy is the thesis of self-ownership, which says that each person is the morally rightful owner of his own person and powers, and, consequently, that each is free to use those powers as he wishes, provided that he does not deploy them aggressively against others. To be sure, he may not harm others, and he may, if necessary, be forced not to harm them, but he should never be forced to help them, as people are in fact forced to help others, according to Nozick, by redistributive taxation.
In her paper, “The Non-Governing Conception of Laws,” Helen Beebee argues that it is not a conceptual truth that laws of nature govern, and thus that one need not insist on a metaphysical account of laws that makes sense of their governing role. I agree with the first point but not the second. Although it is not a conceptual truth, the fact that laws govern follows straightforwardly from an important (though under-appreciated) principle of scientific theory choice combined with a highly plausible claim about the connection between scientific theory choice and theory choice in metaphysics. I present and defend this argument and then show how the resulting understanding of governance gives rise to an especially strong version of recent explanatory circularity arguments against Humeanism about laws of nature. Finally, I present three options for a further understanding of the governance relation that are compatible with my argument.
I present an argument for the view that laws ground their instances. I then outline two important consequences that follow if we accept the conclusion of this argument. First, the claim that laws ground their instances threatens to undermine a prominent recent attempt to make sense of the explanatory power of Humean laws by distinguishing between metaphysical and scientific explanation. And second, the claim that laws ground their instances gives rise to a novel argument against the view that grounding relations are metaphysically necessary.
I argue that the way the world appears to be plays an important role in standard scientific practice, and that therefore the way the world appears to be ought to play a similar role in metaphysics as well. I then show how the argument bears on a specific first-order debate in metaphysics—the debate over whether there are composite objects. This debate is often thought to be a paradigm case of a metaphysical debate that is largely insulated from scientific considerations, and is often disparaged or avoided by naturalistically-inclined metaphysicians as a result. My argument below shows that this attitude is a mistake. The way in which metaphysical debates can be informed by our best science is more complex and far-reaching than is often acknowledged in the literature.
In this book G. A. Cohen examines the libertarian principle of self-ownership, which says that each person belongs to himself and therefore owes no service or product to anyone else. This principle is used to defend capitalist inequality, which is said to reflect each person's freedom to do as he wishes with himself. The author argues that self-ownership cannot deliver the freedom it promises to secure, thereby undermining the idea that lovers of freedom should embrace capitalism and the inequality that comes with it. He goes on to show that the standard Marxist condemnation of exploitation implies an endorsement of self-ownership, since, in the Marxist conception, the employer steals from the worker what should belong to her, because she produced it. Thereby a deeply inegalitarian notion has penetrated what is in aspiration an egalitarian theory. Purging that notion from socialist thought, he argues, enables construction of a more consistent egalitarianism.
In this essay I describe how contractarianism might approach interspecies welfare conflicts. I start by discussing a contractarian account of the moral status of nonhuman animals. I argue that contractors can agree to norms that would acknowledge the “moral standing” of some animals. I then discuss how the norms emerging from contractarian agreement might constrain any comparison of welfare between humans and animals. Contractarian agreement is likely to express some partiality to humans in a way that discounts the welfare of some or all animals. While the norms emerging from the contract might be silent or inconsistent in some tragic or catastrophic cases, in most ordinary conflicts of welfare, contractors will agree to norms that produce some determinate resolution. What the agreement says can evolve depending upon how the contractors or the circumstances change. I close with some remarks on contractarian indeterminacy.
In my dissertation, I present Hermann Cohen's foundation for the history and philosophy of science. My investigation begins with Cohen's formulation of a neo-Kantian epistemology. I analyze Cohen's early work, especially his contributions to 19th century debates about the theory of knowledge. I conclude by examining Cohen's mature theory of science in two works, The Principle of the Infinitesimal Method and its History of 1883, and Cohen's extensive 1914 Introduction to Friedrich Lange's History of Materialism. In the former, Cohen gives an historical and philosophical analysis of the foundations of the infinitesimal method in mathematics. In the latter, Cohen presents a detailed account of Heinrich Hertz's Principles of Mechanics of 1894. Hertz considers a series of possible foundations for mechanics, in the interest of finding a secure conceptual basis for mechanical theories. Cohen argues that Hertz's analysis can be completed, and his goal achieved, by means of a philosophical examination of the role of mathematical principles and fundamental concepts in scientific theories.
Temporal ersatzism is the view that past entities exist, but are not concrete. The view is analogous to modal ersatzism, according to which merely possible worlds exist, but are not concrete. The goal of this paper is to give the reader a sense of the scope of available temporal ersatzist views, the ways in which the analogy with modal ersatzism may be helpful in characterizing and defending those views, and the sorts of considerations that are relevant when evaluating particular versions of temporal ersatzism.
Cartwright (Synthese 121(1/2):3–27, 1999a; The dappled world, Cambridge University Press, Cambridge, 1999b) attacked the view that causal relations conform to the Markov condition by providing a counterexample in which a common cause does not screen off its effects: the prominent chemical factory. In this paper we suggest a new way to handle counterexamples to Markov causation such as the chemical factory. We argue that Cartwright’s as well as similar scenarios feature a certain kind of non-causal dependence that kicks in once the common cause occurs. We then develop a representation of this specific kind of non-causal dependence that allows for modeling the problematic scenarios in such a way that the Markov condition is not violated anymore.
In this critical notice of Kment's _Modality and Explanatory Reasoning_, we focus on Kment’s arguments for impossible worlds and on a key part of his discussion of the interactions between modality and explanation – the analogy that he draws between scientific and metaphysical explanation.
Actualism is the view that only actually existing things exist. Presentism is the view that only presently existing things exist. In this paper, I argue that being an actualist without also being a presentist is not as easy as many philosophers seem to think. A common objection to presentism is that there is an unavoidable conflict between presentism and relativity theory. But actualists who do not wish to be presentists cannot point to this relativity objection alone to support their position. Unless they have some antecedent reason for thinking that actualism is more plausible than presentism, anyone who is moved by the relativity objection to give up presentism should be moved by a related objection to give up actualism as well. If there is a reason to be an actualist without also being a presentist, it must go beyond the relativity objection to presentism.
This paper analyses J.S. Mill's theory of the relationship between individual autonomy and State powers. It will be argued that there is a significant discrepancy between Mill's general liberal statements, aimed at securing the largest possible individual autonomy, and the specific examples which provide the government with quite wide latitude for interference in the public and private spheres. The paper outlines the boundaries of government interference in the Millian theory. Subsequently it describes Mill's elastic paternalism, designed to prevent people from inflicting harm upon others as well as upon themselves, from soft paternalism on issues like compulsory education to hard paternalism on very private matters such as marriage, having children, and divorce by consent.
I argue against the common and influential view that non-trivial chances arise only when the fundamental laws are indeterministic. The problem with this view, I claim, is not that it conflicts with some antecedently plausible metaphysics of chance or that it fails to capture our everyday use of ‘chance’ and related terms, but rather that it is unstable. Any reason for adopting the position that non-trivial chances arise only when the fundamental laws are indeterministic is also a reason for adopting a much stronger, and far less attractive, position. I suggest an alternative account, according to which chances are probabilities that play a certain explanatory role: they are probabilities that explain associated frequencies.
In Bertrand Russell's 1903 Principles of Mathematics, he offers an apparently devastating criticism of the neo-Kantian Hermann Cohen's Principle of the Infinitesimal Method and its History (PIM). Russell's criticism is motivated by his concern that Cohen's account of the foundations of calculus saddles mathematics with the paradoxes of the infinitesimal and continuum, and thus threatens the very idea of mathematical truth. This paper defends Cohen against that objection of Russell's, and argues that properly understood, Cohen's views of limits and infinitesimals do not entail the paradoxes of the infinitesimal and continuum. Essential to that defense is an interpretation, developed in the paper, of Cohen's positions in the PIM as deeply rationalist. The interest in developing this interpretation is not just that it reveals how Cohen's views in the PIM avoid the paradoxes of the infinitesimal and continuum. It also reveals some of what is at stake, both historically and philosophically, in Russell's criticism of Cohen.
The dominant response to the problem of the criterion focuses on the alleged requirement that we need to know that a belief source is reliable in order for us to acquire knowledge by that source. Let us call this requirement “the KR principle”.
Recently, McGinn has proposed a new theory of disgust. This theory makes empirical claims as to the history and function of disgust, yet does not take into account contemporary scientific research on the subject. This essay evaluates his theory for its merits as an account of disgust, and as a piece of scholarship more generally, and finds it lacking.
Some theories of quantum mechanical phenomena endorse wave function realism, according to which the physical space we inhabit is very different from the physical space we appear to inhabit. In this paper I explore an argument against wave function realism that appeals to a type of simplicity that, although often overlooked, plays a crucial role in scientific theory choice. The type of simplicity in question is simplicity of fit between the way a theory says the world is and the way the world appears to be. This argument can be understood as one way of spelling out the so-called “incredulous stare objection” that is sometimes leveled against surprising metaphysical theories.
Temporal eliminativism is the view that the present is privileged because past and future entities do not exist. Temporal ersatzism is the view that the present is privileged because, although past and future entities exist, they are not concrete. I argue that shifting from temporal eliminativism to temporal ersatzism can help to address objections to the former theory that are due to relativity theory—but only if temporal ersatzism is understood in a fairly specific way and only in so far as the temporal ersatzist is willing to take on some prima facie surprising commitments. I close by showing how the claims that I make with respect to temporal ersatzism generalise to other theories of time on which the present is privileged, including McDaniel’s presentist existential pluralism.
The nonidentity problem is a deep puzzle challenging the moral intuition that what is bad must be bad for someone. The first part of the paper constructs a new theory of harming, while the second part builds on the conclusions of the first to offer a new solution to the nonidentity problem. The first part discusses the neglected question of when a burden inflicted in the context of overall benefitting can be discretized as a separate entity—only when it can is it possible to identify the burden as harm, and only then is it possible to harm in bringing someone into an overall good existence. The second part explains how, in those cases where creating is indeed harming, we can use the logic of concept expansion to construct a concept of wronging that applies to creation cases.
I argue that there are such things as nomological probabilities—probabilities that play a certain explanatory role with respect to stable, long-run relative frequencies. Indeed, I argue, we should be willing to accept nomological probabilities even if they turn out to be metaphysically weird or even wholly sui generis entities. I then give an example of one way in which this argument should shape future work on the metaphysics of chance by describing a challenge to a common group of analyses of objective probability—Humean analyses—understood as analyses of nomological probability.
We discuss the thesis formulated by Hintikka (1973) that certain natural language sentences require non-linear quantification to express their meaning. We investigate sentences with combinations of quantifiers similar to Hintikka's examples and propose a novel alternative reading expressible by linear formulae. This interpretation is based on linguistic and logical observations. We report on our experiments, which show that people tend to interpret sentences similar to the Hintikka sentence in a way consistent with our interpretation.
Disgust, the emotion of rotting carcasses and slimy animalitos, finds itself at the center of several critical questions about human culture and cognition. This article summarizes recent developments, identifies active points of debate, and provides an account of where the field is heading next.
Palliative care (PC) names as one of its central aims the prevention and relief of suffering. Following the concept of “total pain”, which was first introduced by Cicely Saunders, PC not only focuses on the physical dimension of pain but also addresses the patient’s psychological, social, and spiritual suffering. However, the goal of relieving suffering can paradoxically lead to a taboo of suffering and imply adverse consequences. Two scenarios are presented. First, PC providers sometimes might fail their own ambitions. If all other means prove ineffective, terminal sedation can still be applied as a last resort. However, it may be asked whether sedating a dying patient comes close to eliminating suffering by eliminating the sufferer, and thereby resembles physician-assisted suicide (PAS) or euthanasia. As an alternative, PC providers could continue treatment, even if it has so far proved unsuccessful. In that case, either futility results or the patient might even suffer from the perpetuated, albeit fruitless, interventions. Second, some patients possibly prefer to endure suffering instead of being relieved of it. Hence, they want to forgo the various bio-psycho-socio-spiritual interventions. PC providers’ efforts then lead to paradoxical consequences: feeling harassed by PC, patients could suffer even more, not less. In both scenarios, suffering is placed under a taboo and is thereby conceptualised as not being tolerable in general. However, to consider suffering essentially unbearable might promote assisted dying not only on an individual but also on a societal level, insofar as unbearable suffering is considered a criterion for euthanasia or PAS.
A series of recent arguments purport to show that most counterfactuals of the form if A had happened then C would have happened are not true. These arguments pose a challenge to those of us who think that counterfactual discourse is a useful part of ordinary conversation, of philosophical reasoning, and of scientific inquiry. Either we find a way to revise the semantics for counterfactuals in order to avoid these arguments, or we find a way to ensure that the relevant counterfactuals, while not true, are still assertible. I argue that regardless of which of these two strategies we choose, the natural ways of implementing these strategies all share a surprising consequence: they commit us to a particular metaphysical view about chance.