This paper examines the connections between triadism and processuality in Peirce’s semiotics by comparing two reducibility theses. Peirce’s thesis regarding the irreducibility of triads and its corollary in semiotics, the irreducibility of signs, is compared with the process metaphysical thesis regarding the irreducibility of processes. The comparison indicates that there is a connection between the irreducibility of signs and the irreducibility of processes; that the triadic condition of the sign entails process metaphysical commitments; and that this in turn urges us to consider the ontology of the sign from a process metaphysical perspective.
This paper analyzes William A. Dembski’s theory of intelligent design. According to Dembski, it is possible to empirically detect signs of intelligence in the world by examining properties of observed events. In order to detect design, Dembski has developed the criterion of specified complexity, by means of which he claims to be able to distinguish events that are designed from those that are caused by necessity or chance. Five problems regarding Dembski’s theory are identified and discussed. It is revealed that Dembski’s theory is not defined rigorously enough to be deemed a scientific theory.
This is the best collection of essays on Rorty’s philosophy that has been published in the last decade. It will be of great interest not only to Rorty specialists but to anyone concerned with the difficulties contemporary analytic philosophy faces in its search for a viable self-understanding. The contributors are Barry Allen, Akeel Bilgrami, Jacques Bouveresse, Robert Brandom, James Conant, Donald Davidson, Daniel Dennett, Jürgen Habermas, John McDowell, Hilary Putnam, Bjørn Ramberg, and Michael Williams. Rorty himself has also written an essay, plus individual and fairly extensive replies to each of his critics.
One of the principles on how to act under moral uncertainty, My Favourite Theory, says roughly that a morally conscientious agent chooses an option that is permitted by the most credible moral theory. In defence of this principle, we argue that it prescribes consistent choices over time, without relying on intertheoretic comparisons of value, while its main rivals are either plagued by moral analogues of money pumps or in need of a method for making non-arbitrary intertheoretic comparisons. We rebut the arguments that have been levelled against My Favourite Theory and offer some arguments against intertheoretic comparisons of value.
In the 1960s, Lars Bergström and Hector-Neri Castañeda noticed a problem with alternative acts and consequentialism. The source of the problem is that some performable acts are versions of other performable acts and the versions need not have the same consequences as the originals. Therefore, if all performable acts are among the agent’s alternatives, act consequentialism yields deontic paradoxes. A standard response is to restrict the application of act consequentialism to certain relevant alternative sets. Many proposals are based on some variation of maximalism, that is, the view that act consequentialism should only be applied to maximally specific acts. In this paper, I argue that maximalism cannot yield the right prescriptions in some cases where one can either (i) form at once the intention to do an immediate act and form at a later time the intention to do a succeeding act or (ii) form at once the intention to do both acts and where the consequences of (i) and (ii) differ in value. Maximalism also violates normative invariance, that is, the condition that if an act is performable in a situation, then the normative status of the act does not depend on what acts are performed in the situation. Instead of maximalism, I propose that the relevant alternatives should be the exhaustive combinations of acts the agent can jointly perform without performing any other act in the situation. In this way, one avoids the problem of act versions without violating normative invariance. Another advantage is that one can adequately differentiate between possibilities like (i) and (ii).
Derek Parfit’s argument against the platitude that identity is what matters in survival does not work given his intended reading of the platitude, namely, that what matters in survival to some future time is being identical with someone who is alive at that time. I develop Parfit’s argument so that it works against the platitude on this intended reading.
John Locke’s account of personal identity is usually thought to have been proved false by Thomas Reid’s simple ‘Gallant Officer’ argument. Locke is traditionally interpreted as holding that your having memories of a past person’s thoughts or actions is necessary and sufficient for your being identical to that person. This paper argues that the traditional memory interpretation of Locke’s account is mistaken and defends a memory continuity view according to which a sequence of overlapping memories is necessary and sufficient for personal identity. On this view Locke is not vulnerable to the Gallant Officer argument.
In this article, I argue that the small-improvement argument fails since some of the comparisons involved in the argument might be indeterminate. I defend this view from two objections by Ruth Chang, namely the argument from phenomenology and the argument from perplexity. There are some other objections to the small-improvement argument that also hinge on claims about indeterminacy. John Broome argues that alleged cases of value incomparability are merely examples of indeterminacy in the betterness relation. The main premise of his argument is the much-discussed collapsing principle. I offer a new counterexample to this principle and argue that Broome's defence of the principle is not cogent. On the other hand, Nicolas Espinoza argues that the small-improvement argument fails as a result of the mere possibility of evaluative indeterminacy. I argue that his objection is unsuccessful.
During the past 40 years, the Philosophy for Children movement has developed a dialogical framework for education that has inspired people both inside and outside academia. This article concentrates on analysing the historical development in general and then taking a more rigorous look at the recent discourse of the movement. The analysis proceeds by examining the changes between the so-called first and second generation, which suggests that Philosophy for Children is adapting to a postmodern world by challenging the humanistic ideas of first-generation authors. A new understanding of childhood is presented by second-generation authors as giving possibilities for the subject to emerge in truly philosophical encounters. This article tries to show some of the possibilities and limits of such an understanding by considering the views in the light of general educational theorisations concerning pedagogical action. The continental tradition of European educational discourse, especially in the German-speaking regions, has stressed a necessity for asymmetry in the educational relationship. This line of thought is in conflict with the idea of a symmetrical, communal emergent system, which seems to be at the heart of the second-generation understanding of educational philosophical dialoguing. The concluding argument states that in education we are always confronted with questions about purpose and aims, which have a special character in relation to pure philosophy/dialogue, although the philosophical/dialogical dimension is necessary for the emergence of unique subjectivity.
In an alleged counter-example to the completeness of rational preferences, a career as a clarinettist is compared with a career in law. It seems reasonable to neither want to judge that the law career is at least as preferred as the clarinet career nor want to judge that the clarinet career is at least as preferred as the law career. The two standard interpretations of examples of this kind are, first, that the examples show that preferences are rationally permitted to be incomplete and, second, that the examples show that preferences are rationally permitted to be indeterminate. In this paper, I shall argue that the difference between these interpretations is crucial for the money-pump argument for transitivity, which is the standard argument that rational preferences are transitive. I shall argue that the money-pump argument for transitivity fails if preferences are rationally permitted to be incomplete but that it works if preferences are rationally permitted to be indeterminate and rationally required to be complete.
In this paper, I argue against defining either of ‘good’ and ‘better’ in terms of the other. According to definitions of ‘good’ in terms of ‘better’, something is good if and only if it is better than some indifference point. Against this approach, I argue that the indifference point cannot be defined in terms of ‘better’ without ruling out some reasonable axiologies. Against defining ‘better’ in terms of ‘good’, I argue that this approach either cannot allow for the incorruptibility of intrinsic goodness or it breaks down in cases where both of the relata of ‘better’ are bad.
Joshua Gert and Wlodek Rabinowicz have developed frameworks for value relations that are rich enough to allow for non-standard value relations such as parity. Yet their frameworks do not allow for any non-standard preference relations. In this paper, I shall defend a symmetry between values and preferences, namely, that for every value relation, there is a corresponding preference relation, and vice versa. I claim that if the arguments that there are non-standard value relations are cogent, these arguments, mutatis mutandis, also show that there are non-standard preference relations. Hence frameworks of Gert and Rabinowicz's type are either inadequate since there are cogent arguments for both non-standard value and preference relations and these frameworks deny this, or they lack support since the arguments for non-standard value relations are unconvincing. Instead, I propose a simpler framework that allows for both non-standard value and preference relations.
Critical-Range Utilitarianism is a variant of Total Utilitarianism which can avoid both the Repugnant Conclusion and the Sadistic Conclusion in population ethics. Yet Standard Critical-Range Utilitarianism entails the Weak Sadistic Conclusion, that is, it entails that each population consisting of lives at a bad well-being level is not worse than some population consisting of lives at a good well-being level. In this paper, I defend a version of Critical-Range Utilitarianism which does not entail the Weak Sadistic Conclusion. This is made possible by a fourth category of absolute value in addition to goodness, badness, and neutrality.
The small-improvement argument is usually considered the most powerful argument against comparability, viz. the view that for any two alternatives an agent is rationally required either to prefer one of the alternatives to the other or to be indifferent between them. We argue that while there might be reasons to believe each of the premises in the small-improvement argument, there is a conflict between these reasons. As a result, the reasons do not provide support for believing the conjunction of the premises. Without support for the conjunction of the premises, the small-improvement argument for incomparability fails.
The money-pump argument is the standard argument for the acyclicity of rational preferences. The argument purports to show that agents with cyclic preferences are in some possible situations forced to act against their preference. In the usual, diachronic version of the money-pump argument, such agents accept a series of trades that leaves them worse off than before. Two stock objections are (i) that one may get the drift and refuse the trades and (ii) that one may adopt a plan to only accept some of the trades. This article argues that these objections are irrelevant. If the diachronic money-pump argument is cogent, so is a more direct synchronic argument. The upshot is that the standard objections to the diachronic money-pump argument do not affect this simpler synchronic argument. Hence the standard objections to the money-pump argument for acyclicity are irrelevant.
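The diachronic pump can be made concrete with a toy simulation (an illustrative sketch only, not code from the article; the three goods A, B, C and the one-unit trading fee are assumptions for the example): an agent with cyclic strict preferences pays a small fee for each preferred trade and ends up holding the good it started with, strictly poorer.

```python
# Cyclic strict preferences: A is preferred to B, B to C, and C to A.
# The agent trades whenever offered a strictly preferred good for a small fee.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}

def accepts(offered, held):
    """The agent accepts a trade iff it strictly prefers the offered good."""
    return (offered, held) in prefers

money, held = 100.0, "C"
for offered in ["B", "A", "C", "B", "A", "C"]:
    if accepts(offered, held):
        held, money = offered, money - 1.0  # pay 1 unit per trade

# After six trades the agent holds its original good, six units poorer.
assert held == "C" and money == 94.0
```

Each individual trade looks rational to the agent, yet the sequence as a whole leaves it worse off, which is the exploitation the argument turns on.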
Although organizational and situational factors have been found to predict burnout, not everyone employed at the same workplace develops it, suggesting that becoming burnt out is a complex, multifaceted phenomenon. The aim of this study was to elucidate perceptions of conscience, stress of conscience, moral sensitivity, social support and resilience among two groups of health care personnel from the same workplaces, one group on sick leave owing to medically assessed burnout (n = 20) and one group who showed no indications of burnout (n = 20). The results showed that higher levels of stress of conscience, a perception of conscience as a burden, having to deaden one’s conscience in order to keep working in health care and perceiving a lack of support characterized the burnout group. Lower levels of stress of conscience, looking on life with forbearance, a perception of conscience as an asset and perceiving support from organizations and those around them (social support) characterized the non-burnout group.
In Cavell (1994), the ability to follow and produce Austinian examples of ordinary language use is compared with the faculty of perfect pitch. Exploring this comparison, I clarify a number of central and interrelated aspects of Cavell's philosophy: (1) his way of understanding Wittgenstein's vision of language, and in particular his claim that this vision is "terrifying," (2) the import of Wittgenstein's vision for Cavell's conception of the method of ordinary language philosophy, (3) Cavell's dissatisfaction with Austin, and in particular his claim that Austin is not clear about the nature and possible achievements of his own philosophical procedures, and (4) Cavell's notion that the temptation of skepticism is perennial and incurable. Cavell's reading of Wittgenstein is related to that of John McDowell. Like McDowell, Cavell takes Wittgenstein to be saying that the traditional attempt to justify our practices from an external standpoint is misguided, since such detachment involves losing sight of those conceptual and perceptual capacities in terms of which a practice is understood by its engaged participants. Unlike McDowell, however, Cavell consistently rejects the idea that philosophical clearsightedness can or should free us from that fear of groundlessness which motivates the traditional search for external justification.
Andy Egan argues that neither evidential nor causal decision theory gives the intuitively right recommendation in the cases The Smoking Lesion, The Psychopath Button, and The Three-Option Smoking Lesion. Furthermore, Egan argues that we cannot avoid these problems by any kind of ratificationism. This paper develops a new version of ratificationism that gives the right recommendations. Thus, the new proposal has an advantage over evidential and causal decision theory and standard ratificationist evidential decision theory.
Given reductionism about people, personal persistence must fundamentally consist in some kind of impersonal continuity relation. Typically, these continuity relations can hold from one to many. And, if they can, the analysis of personal persistence must include a non-branching clause to avoid non-transitive identities or multiple occupancy. It is far from obvious, however, what form this clause should take. This paper argues that previous accounts are inadequate and develops a new proposal.
The standard argument for the claim that rational preferences are transitive is the pragmatic money-pump argument. However, a money-pump only exploits agents with cyclic strict preferences. In order to pump agents who violate transitivity but without a cycle of strict preferences, one needs to somehow induce such a cycle. Methods for inducing cycles of strict preferences from non-cyclic violations of transitivity have been proposed in the literature, based either on offering the agent small monetary transaction premiums or on multi-dimensional preferences. This paper argues that previous proposals have been flawed and presents a new approach based on the dominance principle.
The aim of the Consequence Argument is to show that, if determinism is true, no one has, or ever had, any choice about anything. In the stock version of the argument, its two premisses state that no one is, or ever was, able to act so that the past would have been different and no one is, or ever was, able to act so that the laws of nature would have been different. This stock version fails, however, because it requires an invalid inference rule. The standard response is to strengthen both premisses by replacing ‘would’ with ‘might’. While this response ensures validity, it weakens the argument, since it strengthens the premisses. I show that we can do better: We can keep the weak reading of one premiss and just strengthen the other. This provides two versions of the Consequence Argument which are stronger than the standard revision.
Any theory that analyses personal identity in terms of phenomenal continuity needs to deal with the ordinary interruptions of our consciousness that a person is commonly thought to survive. This is the bridge problem. The present paper offers a novel solution to the bridge problem based on the proposal that dreamless sleep need not interrupt phenomenal continuity. On this solution one can both hold that phenomenal continuity is necessary for personal identity and that persons can survive dreamless sleep.
In their discussions and criticisms of the idea that language use is essentially a matter of following rules, Davidson and Cavell both invoke as counterexamples instances of intelligible linguistic innovation. Davidson’s favorite examples are malapropisms. Cavell focuses instead on what he calls projections. This paper clarifies some important differences between malapropisms and projections, conceived as paradigmatic forms of linguistic innovation. If malapropisms are treated as exemplary, it will be natural to conclude, with Davidson, that a shared practice, be it rule-governed or not, matters only instrumentally, as something that may enhance but is neither necessary nor sufficient for successful communication. By contrast, if Cavellian projections are seen as exemplary, a shared practice will be conceived not only as essential to the possibility of meaningful linguistic innovation, but as already permeated by the sort of creativity of which projections are only particularly striking examples. It is also argued that malapropisms are not particularly convincing as counterexamples to the sort of view Davidson wants to reject. Cavellian projections, on the other hand, are powerful as counterexamples, and reflecting on the nature of their inventiveness is crucial to understanding and seeing the plausibility of Cavell’s own conception of language.
Moral wrongness comes in degrees. On a consequentialist view of ethics, the wrongness of an act should depend, I argue, in part on how much worse the act's consequences are compared with those of its alternatives and in part on how difficult it is to perform the alternatives with better consequences. I extend act consequentialism to take this into account, and I defend three conditions on consequentialist theories. The first is consequentialist dominance, which says that, if an act has better consequences than some alternative act, then it is not more wrong than the alternative act. The second is consequentialist supervenience, which says that, if two acts have equally good consequences in a situation, then they have the same deontic status in the situation. And the third is consequentialist continuity, which says that, for every act and for any difference in wrongness δ greater than zero, there is an arbitrarily small improvement of the consequences of the act which would, other things being equal, not change the wrongness of that act or any alternative by more than δ. I defend a proposal that satisfies these conditions.
This article develops a new measure of freedom of choice based on the proposal that a set offers more freedom of choice than another if, and only if, the expected degree of dissimilarity between a random alternative from the set of possible alternatives and the most similar offered alternative in the set is smaller. Furthermore, a version of this measure is developed, which is able to take into account the values of the possible options.
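The proposed measure can be sketched in a few lines (an illustrative reconstruction under simplifying assumptions, not the article's formalism; the one-dimensional universe and distance-based dissimilarity are choices made for the example). A set's score is the expected dissimilarity between a uniformly random possible alternative and the most similar offered alternative; a lower score means more freedom of choice.

```python
def freedom_score(offered, universe, dissim):
    """Expected dissimilarity between a uniformly random possible
    alternative and the most similar offered alternative.
    Lower scores indicate more freedom of choice."""
    return sum(min(dissim(x, s) for s in offered) for x in universe) / len(universe)

# Toy example: alternatives are points on a line, dissimilarity is distance.
universe = list(range(10))
dissim = lambda x, y: abs(x - y)

spread_out = [1, 5, 8]   # covers the space of possible alternatives well
clustered = [4, 5, 6]    # leaves distant alternatives poorly represented

# The spread-out set scores lower, i.e. offers more freedom of choice.
assert freedom_score(spread_out, universe, dissim) < freedom_score(clustered, universe, dissim)
```

The intuition the sketch captures is that a set of similar options, however large, adds little freedom, whereas a few well-spread options cover more of what an agent might have wanted.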
According to the widely held anti-aggregation principle, it is wrong to save a larger number of people from minor harms rather than a smaller number from much more serious harms. This principle is a central part of many influential and anti-utilitarian ethical theories. According to the sequential-dominance principle, one does something wrong if one knowingly performs a sequence of acts whose outcome would be worse for everyone than the outcome of an alternative sequence of acts. The intuitive appeal of the sequential-dominance principle should be obvious; everyone is knowingly made worse off if it is violated. In this paper, I present a number of cases where one is forced to violate either the anti-aggregation principle or the sequential-dominance principle. I show that these principles conflict regardless of whether one accepts a counterfactual or a temporal, worsening view of harm. Moreover, I show that this result holds regardless of how much worse a harm has to be in order to count as a much more serious harm.
If ‘F’ is a predicate, then ‘Fer than’ or ‘more F than’ is a corresponding comparative relational predicate. Concerning such comparative relations, John Broome’s Collapsing Principle states that, for any x and y, if it is false that y is Fer than x and not false that x is Fer than y, then it is true that x is Fer than y. Luke Elson has recently put forward two alleged counter-examples to this principle, allegedly showing that it yields contradictions if there are borderline cases. In this paper, I argue that the Collapsing Principle does not rule out borderline cases, but I also argue that it is implausible.
Rawls's distinction between perfect and imperfect procedural justice relies on the notion of a procedure that is guaranteed to lead to a certain independently specifiable result. Clarification of this notion shows that it makes the distinction between perfect and imperfect procedural justice unreal, in the following sense: whether, in a particular case, we have an instance of perfect or imperfect procedural justice depends only on how we choose to specify the procedure that is being followed. Key Words: procedural justice; John Rawls.
In this paper we shed new light on the Argument from Disagreement by putting it to test in a computer simulation. According to this argument, widespread and persistent disagreement on ethical issues indicates that our moral opinions are not influenced by any moral facts, either because no such facts exist or because they are epistemically inaccessible or inefficacious for some other reason. Our simulation shows that if our moral opinions were influenced at least a little bit by moral facts, we would quickly have reached consensus, even if our moral opinions were affected by factors such as false authorities, external political shifts, and random processes. Therefore, since no such consensus has been reached, the simulation gives us increased reason to take seriously the Argument from Disagreement. Our result is, however, not decisive; the simulation also indicates what assumptions one has to make in order to reject the Argument from Disagreement. The simulation algorithm we use builds on the work of Hegselmann and Krause (J Artif Soc Social Simul 5(3), 2002; 9(3), 2006).
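The bounded-confidence dynamics underlying such simulations can be sketched briefly (a minimal illustration of a Hegselmann–Krause model with a pull toward the truth, not the authors' actual simulation; the parameters eps, alpha, and truth are assumptions chosen for the example). Each agent averages the opinions of those within its confidence bound, then moves a small fraction of the way toward the true value; even a weak truth signal drives the population to consensus at the truth.

```python
import random

def hk_step(opinions, eps, alpha, truth):
    """One round of Hegselmann-Krause updating with a pull toward the truth.
    Each agent averages the opinions within confidence bound eps, then moves
    a fraction alpha of the way toward the true value."""
    new = []
    for x in opinions:
        peers = [y for y in opinions if abs(y - x) <= eps]
        social = sum(peers) / len(peers)
        new.append((1 - alpha) * social + alpha * truth)
    return new

random.seed(0)
opinions = [random.random() for _ in range(50)]  # 50 agents, opinions in [0, 1]
for _ in range(100):
    opinions = hk_step(opinions, eps=0.2, alpha=0.1, truth=0.7)

# With even a small pull toward the truth, opinions converge tightly around it.
assert max(abs(x - 0.7) for x in opinions) < 0.01
```

Setting alpha to zero recovers the pure bounded-confidence model, in which the population typically fragments into several stable clusters instead of reaching consensus, which is the contrast the argument exploits.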
Frege’s use of a judgment stroke in his conceptual notation has been a matter of controversy, at least since Wittgenstein rejected it as “logically quite meaningless” in the Tractatus. Recent defenders of Frege include Tyler Burge, Nicolas Smith and Wolfgang Künne, whereas critics include William Taschek and Edward Kanterian. Against the background of these defenses and criticisms, the present paper argues that Frege faces a dilemma the two horns of which are related to his early and later conceptions of asserted content respectively. On the one hand, if content is thought of as something that has propositional structure, then the judgment stroke is superfluous. On the other hand, if what is to the right of the judgment stroke is conceived as a sort of name designating a truth-value, then there is no consistent way to avoid construing the judgment stroke as a kind of predicate, and thereby fail to do justice to the act-character of judgment and assertion.
This essay argues that Carl Schmitt’s postwar writings offer an original critique of biotechnology and utopian thinking. Examining the classics of utopian literature from Plato to Thomas More and Aldous Huxley, Schmitt illustrates the rise of utopianism that aims to transform human nature and even produce an artificial “human-machine.” Schmitt discovers a counterimage to the emerging era of biotechnology from a katechontic form of Christianity and maintains that human beings must recognize their shared humanity in God, warning us that without a realm of transcendence, the enemy no longer offers an existential mirror but begins to incarnate foreign values, which must be destroyed completely. By comparing Schmitt with Michel Foucault and Donna Haraway, it is also argued that Schmitt’s thinking unlocks a novel path to exploring the meaning and histories of biopolitics and posthumanism. From a Schmittian perspective, Foucault’s depiction of biopolitics appears as a mere prelude to the coming age of biotechnology that will lead us into a posthuman era. Demonstrating interesting contrasts with Haraway’s utopian vision of the cyborg, it is maintained that Schmitt’s thinking offers a distinctively conservative-Christian critique of posthumanism.
Immanuel Kant’s conceptions of ethics and aesthetics, including his philosophy of judgment and practical knowledge, are widely discussed today among scholars in various fields: philosophy, political science, aesthetics, educational science, and others. His ideas continue to inspire and encourage an ongoing interdisciplinary dialogue, leading to an increasing awareness of the interdependence between societies and people and a clearer sense of the challenges we face in cultivating ourselves as moral beings. Early on in his career, Cavell began to recognize the strong connection between Kant’s aesthetics (as it finds its expression in the Critique of the Power of Judgment) and the claims of ordinary language ..
In order to account for non-traditional preference relations the present paper develops a new, richer framework for preference relations. This new framework provides characterizations of non-traditional preference relations, such as incommensurateness and instability, that may hold when neither preference nor indifference does. The new framework models relations with swaps, which are conceived of as transfers from one alternative state to another. The traditional framework analyses dyadic preference relations in terms of a hypothetical choice between the two compared alternatives. The swap framework extends this approach by analysing dyadic preference relations in terms of two hypothetical choices: the choice between keeping the first of the compared alternatives or swapping it for the second; and the choice between keeping the second alternative or swapping it for the first.
Which concept is the more primitive when it comes to the functioning of the logical constants: representation or inference? Via a discussion of Arthur Prior’s famous mock connective “tonk” and a couple of responses to Prior by J. T. Stevenson and Nuel Belnap, it is argued that early Wittgenstein’s answer is neither. Instead, he takes representation and inference to be equally basic and mutually dependent notions. The nature and significance of this mutual dependence is made clear by an investigation into the Tractarian notion of a proposition. It is further argued that even if Wittgenstein later abandoned the Tractarian conception of what a proposition is, he never gave up the idea that inference and representation play interdependent and equally fundamental roles in logic.
The aim of this study was to elucidate municipal night registered nurses’ (RNs) experiences of the meaning of caring in nursing. The research context involved all night duty RNs working in municipal care of older people in a medium-sized municipality located in central Sweden. The meaning of caring in nursing was experienced as: caring for by advocacy, superior responsibility in caring, and consultative nursing service. The municipal night RNs’ experience of caring is interpreted as meanings in paradoxes: ‘being close at distance’, the condition of ‘being responsible with insignificant control’, and ‘being interdependently independent’. The RNs’ experience of the meaning of caring involves focusing on the care recipient by advocating their perspectives. The meaning of caring in this context is an endeavour to grasp an overall caring responsibility by responding to vocational and personal demands regarding the issue of being a RN, in guaranteeing ethical, qualitative and competent care for older people.
In this paper, we aim to show that a study of Gilbert Ryle’s work has much to contribute to the current debate between intellectualism and anti-intellectualism with respect to skill and know-how. According to Ryle, knowing how and skill are distinctive from and do not reduce to knowing that. What is often overlooked is that for Ryle this point is connected to the idea that the distinction between skill and mere habit is a category distinction, or a distinction in form. Criticizing the reading of Ryle presented by Jason Stanley, we argue that once the formal nature of Ryle’s investigation is recognized it becomes clear that his dispositional account is not an instance of reductionist behaviorism, and that his regress argument has a broader target than Stanley appears to recognize.
If calculation and judgment are to answer the question ‘Which way?’, perfectionist thinking is a response to the way’s being lost. In his thought-provoking exploration of Cavellian perfectionism, which he sees as identical with what Cavell himself prefers to call Emersonian perfectionism, Paul Guyer quotes the following passage from Cities of Words: Emerson’s writing, in demonstrating our lack of given means of making ourselves intelligible (to ourselves, to others), details the difficulties in the way of possessing those means, and demonstrates that they are at hand. This thought, implying our need of invention and transformation, expresses two dominating themes of perfectionism.1 Guyer makes the following comment: This ..