Kripke claims that certain kind terms, particularly natural kind terms, are, like names, rigid designators. However, kind terms are more complicated than names as each is connected both to a principle of inclusion and an extension. So, there is a question regarding what it is that rigidly designating kind terms rigidly designate. In this paper, I assume that there are rigidly designating kind terms and attempt to answer the question as to what it is that they rigidly designate. I then use this analysis of rigidly designating kind terms to show how Kripke's reasoning regarding the necessity of 'Hesperus is Phosphorus' can be extended to statements involving kind terms like 'Water is H2O' and 'Tigers are mammals'.
In his paper ‘Theism and the philosophy of nature’, Ben Cordry argues that theism's conception of nature has been falsified. In this response, I argue that the universe in many ways conforms to theistic expectations, and that there is no presumption that a divinely ordered world will take the form that Cordry proposes. (Published Online July 10 2006).
Mill's most famous departure from Bentham is his distinction between higher and lower pleasures. This article argues that quality and quantity are independent and irreducible properties of pleasures that may be traded off against each other – as in the case of quality and quantity of wine. I argue that Mill is not committed to thinking that there are two distinct kinds of pleasure, or that ‘higher pleasures’ lexically dominate lower ones, and that the distinction is compatible with hedonism. I show how this interpretation not only makes sense of Mill but allows him to respond to famous problems, such as Crisp's Haydn and the oyster and Nozick's experience machine.
Andrew Eshleman has argued that atheists can believe in God by being fully engaged members of religious communities and using religious discourse in a non-realist way. He calls this position 'fictionalism' because the atheist takes up religion as a useful fiction. In this paper I critique fictionalism along two lines: that it is problematic to successfully be a fictionalist and that fictionalism is unjustified. Reflection on fictionalism will point to some wider problems with religious anti-realism.
While Hume has often been held to have been an agnostic or atheist, several contemporary scholars have argued that Hume was a theist. These interpretations depend chiefly on several passages in which Hume allegedly confesses to theism. In this paper, I argue against this position by giving a threshold characterization of theism and using it to show that Hume does not confess. His most important confession does not cross this threshold and the ones that do are often expressive rather than assertive. I then argue that Hume is best interpreted as an atheist. Instead of interpreting Hume as a proto-logical positivist and arguing on the basis of Hume’s theories of meaning and method, I show that textually he appears to align himself with atheism, that his arguments in the Dialogues Concerning Natural Religion support atheism, and that this position is most consistent with Hume’s naturalism. But, I hold that his atheism is soft and therefore distinct from that of his peers like Baron d’Holbach—while Hume really does reject theism, he neither embraces a dogmatically materialist position nor takes up a purely polemical stance towards theism. I conclude by suggesting several ways in which Hume’s atheistic philosophy of religion is relevant to contemporary discussions.
In this paper I argue that Poston and Dougherty's attempt to undermine the problem of divine hiddenness by using the notion of belief de re is problematic at best. They hold that individuals who appear to be unbelievers (because they are de dicto unbelievers) may actually be de re believers. I construct a set of conditions on ascribing belief de re to show that it is prima facie implausible to claim that seemingly inculpable and apparent unbelievers are really de re believers. Thus, while it is indeed possible that a de dicto unbeliever is a de re believer, it is unlikely that this has sufficiently general application to actual individuals to alleviate the problem of divine hiddenness.
In this article, I argue that if God existed as an absolute, cosmic sovereign, there would be a right to know this, which God would fulfill either by giving people such knowledge or positioning them so that they can achieve it. I then argue that there are many cases of different types in which this right, were it to exist, would be unfulfilled. Therefore, there is no God in this sense. While I focus on the right to know, my argument generalizes to show that no being or force oversees the world in such a way as to ensure the achievability of spiritual fulfillment.
In this paper I argue that traditional theism, in its theory, history, and practice has implications for the philosophy of nature. Namely, nature should be designed around aesthetic or meaningful principles and nature should be engineered in order to fulfil a fairly well defined set of purposes. If theism is true, we should be able to study nature objectively as a teleological system. After all, the teleological structure of nature is more important to us as spiritual beings than its mechanisms. Since a teleological philosophy of nature is no longer viable, traditional theism is untenable. (Published Online July 10 2006).
Consequentialists typically think that the moral quality of one's conduct depends on the difference one makes. But consequentialists may also think that even if one is not making a difference, the moral quality of one's conduct can still be affected by whether one is participating in an endeavour that does make a difference. Derek Parfit discusses this issue – the moral significance of what I call ‘participation’ – in the chapter of Reasons and Persons that he devotes to what he calls ‘moral mathematics’. In my paper, I expose an inconsistency in Parfit's discussion of moral mathematics by showing how it gives conflicting answers to the question of whether participation matters. I conclude by showing how an appreciation of Parfit's error sheds some light on consequentialist thought generally, and on the debate between act- and rule-consequentialists specifically.
_Foucault’s Law_ is the first book in almost fifteen years to address the question of Foucault’s position on law. Many readings of Foucault’s conception of law start from the proposition that he failed to consider the role of law in modernity, or indeed that he deliberately marginalized it. In canvassing a wealth of primary and secondary sources, Ben Golder and Peter Fitzpatrick rebut this argument. They argue that rather than marginalize law, Foucault develops a much more radical, nuanced and coherent theory of law than his critics have acknowledged. For Golder and Fitzpatrick, Foucault’s law is not the contained creature of conventional accounts, but is uncontainable and illimitable. In their radical re-reading of Foucault, they show how Foucault outlines a concept of law which is not tied to any given form or subordinated to a particular source of power, but is critically oriented towards alterity, new possibilities and different ways of being. _Foucault’s Law_ is an important and original contribution to the ongoing debate on Foucault and law, engaging not only with Foucault’s diverse writings on law and legal theory, but also with the extensive interpretive literature on the topic. It will thus be of interest to students and scholars working in the fields of law and social theory, legal theory and law and philosophy, as well as to students of Foucault’s work generally.
Mill’s harm principle is commonly supposed to rest on a distinction between self-regarding conduct, which is not liable to interference, and other-regarding conduct, which is. As critics have noted, this distinction is difficult to draw. Furthermore, some of Mill’s own applications of the principle, such as his forbidding of slavery contracts, do not appear to fit with it. This article proposes that the self-regarding/other-regarding distinction is not in fact fundamental to Mill’s harm principle. The sphere of protected liberty includes not only self-regarding conduct, but also actions that affect only consenting others. On the other hand, the occasional permissibility of interfering with self-regarding conduct can plausibly be explained by reference to the agent’s consent. Thus, the more important distinction appears to be that between consensual and non-consensual harm, rather than that between self-regarding and non-self-regarding action. That is, interference can be justified in order to prevent non-consensual harms, but not to prevent consensual harms. It is argued that the harm principle, thus reformulated, both captures Mill’s intentions and is a substantively plausible position.
In this article, I attempt to resuscitate the perennially unfashionable distinctive feeling theory of pleasure (and pain), according to which for an experience to be pleasant (or unpleasant) is just for it to involve or contain a distinctive kind of feeling. I do this in two ways. First, by offering powerful new arguments against its two chief rivals: attitude theories, on the one hand, and the phenomenological theories of Roger Crisp, Shelly Kagan, and Aaron Smuts, on the other. Second, by showing how it can answer two important objections that have been made to it. First, the famous worry that there is no felt similarity to all pleasant (or unpleasant) experiences (sometimes called ‘the heterogeneity objection’). Second, what I call ‘Findlay’s objection’, the claim that it cannot explain the nature of our attraction to pleasure and aversion to pain.
In a series of recent publications, Jeffrey King (The nature and structure of content, 2007; Proc Aristot Soc 109(3):257–277, 2009; Philos Stud, 2012) argues for a view on which propositions are facts. He also argues against views on which propositions are set-theoretical objects, in part because such views face Benacerraf problems. In this paper, we argue that, when it comes to Benacerraf problems, King’s view doesn’t fare any better than its set-theoretical rivals do. Finally, we argue that his view faces a further Benacerraf problem, one that threatens to undercut his explanation of why propositions have truth-conditions. If correct, our arguments undercut King’s main motivation for accepting his view over its rivals.
In this book, Ben-Yami reassesses the way Descartes developed and justified some of his revolutionary philosophical ideas. The first part of the book shows that one of Descartes' most innovative and influential ideas was that of representation without resemblance. Ben-Yami shows how Descartes transfers insights originating in his work on analytic geometry to his theory of perception. The second part shows how Descartes was influenced by the technology of the period, notably clockwork automata, in holding life to be a mechanical phenomenon, reducing the soul to the mind and considering it immaterial. Ben-Yami explores the later role of the digital computer in Turing's criticism of Descartes' ideas. The last part discusses the Meditations: far from starting everything afresh without presupposing anything that can be doubted, Descartes' innovations in the dream argument, the cogito and elsewhere are modifications of old ideas based upon considerations issuing from his separately developed theories, formed under the influence of the technology, mathematics and science of his age.
In this paper, I argue that, when it comes to explaining what can be described as “representational” properties of propositions, Soames’s new conception of propositions—on which the proposition that Seattle is sunny is the act of predicating the property being sunny of Seattle and to entertain that proposition is to perform that act—does not have an advantage over traditional ones.
According to welfarism about value, something is good simpliciter just in case it is good for some being or beings. In her recent Presidential Address to the American Philosophical Association, “Good-For-Nothings”, Susan Wolf argues against welfarism by appeal to great works of art, literature, music, and philosophy. Wolf provides three main arguments against this view, which I call The Superfluity Argument, The Explanation of Benefit Argument, and The Welfarist’s Mistake. In this paper, I reconstruct these arguments and explain where, in my view, each goes wrong.
Lethal organ donation is a hypothetical procedure in which vital organs are removed from living donors, resulting in their death. An important objection to lethal organ donation is that it would infringe the prohibition on doctors intentionally causing the death of patients. I present a series of arguments intended to undermine this objection. In a case of lethal organ donation, the donor’s death is merely foreseen, and not intended.
Disability or health-related literature has potential to shape public understanding of disability and can also play an important role in medical curricula. However, there appears to be a gap between a health humanities approach which may embrace fictional accounts and a cultural disability studies approach which is deeply sceptical of fiction written by non-disabled authors. This paper seeks to reconcile these perspectives and presents an analysis of the language used by Jonathan Franzen in his description of Parkinson’s disease in the novel The Corrections. We use detailed linguistic analysis, specifically stylistics, to identify the techniques Franzen adopts to represent aspects of impairment and disability. We describe four specific linguistic devices used in the novel: reflector mode, iconicity, body part agency and fragmentation. We show how stylistics offers a unique analytical perspective for understanding representations of disability and impairment. However, we emphasise the need to promote critical and even resistant understandings of such representations and we discuss the potential role of patient/service user input to assess fictional accounts.
According to hedonism about well-being, lives can go well or poorly for us just in virtue of our ability to feel pleasure and pain. Hedonism has had many advocates historically, but has relatively few nowadays. This is mainly due to three highly influential objections to it: The Philosophy of Swine, The Experience Machine, and The Resonance Constraint. In this paper, I attempt to revive hedonism. I begin by giving a precise new definition of it. I then argue that the right motivation for it is the ‘experience requirement’ (i.e., that something can benefit or harm a being only if it affects the phenomenology of her experiences in some way). Next, I argue that hedonists should accept a felt-quality theory of pleasure, rather than an attitude-based theory. Finally, I offer new responses to the three objections. Central to my responses are (i) a distinction between experiencing a pleasure (i.e., having some pleasurable phenomenology) and being aware of that pleasure, and (ii) an emphasis on diversity in one’s pleasures.
Frege's invention of the predicate calculus has been the most influential event in the history of modern logic. The calculus' place in logic is so central that many philosophers, in fact, think of it when they think of logic. This book challenges the predicate calculus's central position in contemporary logic and philosophy of language, claiming that it is based on mistaken assumptions. Ben-Yami shows that the predicate calculus is different from natural language in its fundamental semantic characteristics, primarily in its treatment of reference and quantification, and that as a result the calculus is inadequate for the analysis of the semantics and logic of natural language. Ben-Yami develops both an alternative analysis of the semantics of natural language and an alternative deductive system, comparable in its deductive power to the first-order predicate calculus but more adequate for the representation of the logic of natural language. Ben-Yami's book is a challenge to classical first-order predicate calculus, casting doubt on many of the central claims of modern logic.
Answers to the questions of what justifies conscientious objection in medicine in general and which specific objections should be respected have proven to be elusive. In this paper, I develop a new framework for conscientious objection in medicine that is based on the idea that conscience can express true moral claims. I draw on one of the historical roots, found in Adam Smith’s impartial spectator account, of the idea that an agent’s conscience can determine the correct moral norms, even if the agent’s society has endorsed different norms. In particular, I argue that when a medical professional is reasoning from the standpoint of an impartial spectator, his or her claims of conscience are true, or at least approximate moral truth to the greatest degree possible for creatures like us, and should thus be respected. In addition to providing a justification for conscientious objection in medicine by appealing to the potential truth of the objection, the account advances the debate regarding the integrity and toleration justifications for conscientious objection, since the standard of the impartial spectator specifies the boundaries of legitimate appeals to moral integrity and toleration. The impartial spectator also provides a standpoint of shared deliberation and public reasons, from which a conscientious objector can make their case in terms that other people who adopt this standpoint can and should accept, thus offering a standard fitting to liberal democracies.
The daring idea that convention - human decision - lies at the root both of necessary truths and much of empirical science reverberates through twentieth-century philosophy, constituting a revolution comparable to Kant's Copernican revolution. This is the first comprehensive study of conventionalism. Drawing a distinction between two conventionalist theses, the under-determination of science by empirical fact, and the linguistic account of necessity, Yemima Ben-Menahem traces the evolution of both ideas to their origins in Poincaré's geometric conventionalism. She argues that the radical extrapolations of Poincaré's ideas by later thinkers, including Wittgenstein, Quine, and Carnap, eventually led to the decline of conventionalism. This book provides a new perspective on twentieth-century philosophy. Many of the major themes of contemporary philosophy emerge in this book as arising from engagement with the challenge of conventionalism.
By any reasonable reckoning, Gottlob Frege's ‘On Sense and Reference’ is one of the more important philosophical papers of all time. Although Frege briefly discusses the sense-reference distinction in an earlier work, it is through ‘Sense and Reference’ that most philosophers have become familiar with it. And the distinction so thoroughly permeates contemporary philosophy of language and mind that it is almost impossible to imagine these subjects without it. The distinction between the sense and the referent of a name is introduced in the second paragraph of ‘Sense and Reference’.
Chalmers suggests that, if a Singularity fails to occur in the next few centuries, the most likely reason will be 'motivational defeaters', i.e. that at some point humanity or human-level AI may abandon the effort to create dramatically superhuman artificial general intelligence. Here I explore one plausible way in which that might happen: the deliberate human creation of an 'AI Nanny' with mildly superhuman intelligence and surveillance powers, designed either to forestall the Singularity eternally, or to delay the Singularity until humanity more fully understands how to execute a Singularity in a positive way. It is suggested that as technology progresses, humanity may find the creation of an AI Nanny desirable as a means of protecting against the destructive potential of various advanced technologies such as AI, nanotechnology and synthetic biology.
In Parts of Classes and "Mathematics is Megethology" David Lewis shows how the ideology of set membership can be dispensed with in favor of parthood and plural quantification. Lewis's theory has it that singletons are mereologically simple and leaves the relationship between a thing and its singleton unexplained. We show how, by exploiting Kit Fine's mereology, we can resolve Lewis's mysteries about the singleton relation and vindicate the claim that a thing is a part of its singleton.
A central question, if not the central question, of philosophy of perception is whether sensory states have a nature similar to thoughts about the world, whether they are essentially representational. According to the content view, at least some of our sensory states are, at their core, representations with contents that are either accurate or inaccurate. Tyler Burge’s Origins of Objectivity is the most sustained and sophisticated defense of the content view to date. His defense of the view is problematic in several ways. The most significant problem is that his approach does not sit well with mainstream perceptual psychology.
Ben-Yami presents Wittgenstein’s explicit criticism of the Platonic identification of an explanation with a definition and the alternative forms of explanation he employed. He then discusses a few predecessors of Wittgenstein’s criticisms and the Fregean background against which he wrote. Next, the idea of family resemblance is introduced, and objections answered. Wittgenstein’s endorsement of vagueness and the indeterminacy of sense are presented, as well as the open texture of concepts. Common misunderstandings are addressed along the way. Wittgenstein’s ideas, as is then shown, have far-reaching implications for knowledge of meaning and the nature of logic, and with them for the nature of the philosophical project and its possible achievements.
Political theorists have developed and refined the concept of culture through much critical discussion with anthropology. This article will deepen this engagement by claiming that political theory...
The Humean Theory of Reasons, according to which all of our reasons for action are explained by our desires, has been criticized for not being able to account for “moral reasons,” namely, overriding reasons to act on moral demands regardless of one's desires. My aim in this paper is to utilize ideas from Adam Smith's moral philosophy in order to offer a novel and alternative account of moral reasons that is both desire-based and accommodating of an adequate version of the requirement that moral demands have overriding reason-giving force. In particular, I argue that the standpoint of what Smith calls “the impartial spectator” can both determine what is morally appropriate and inappropriate and provide the basis for normative reasons for action—including reasons to act on moral demands—to nearly all reason-responsive agents and, furthermore, that these reasons have the correct weight. The upshot of the proposed account is that it offers an interesting middle road out of a dilemma pertaining to the explanatory and normative dimensions of reasons for informed-desire Humean theorists.
What is it for a life to be meaningful? In this article, I defend what I call Consequentialism about Meaning in Life, the view that one's life is meaningful at time t just in case one's surviving at t would be good in some way, and one's life was meaningful considered as a whole just in case the world was made better in some way for one's having existed.
This paper defends an ‘opt-out’ scheme for organ procurement, by distinguishing this system from ‘presumed consent’ (which the author regards as an erroneous justification of it). It, first, stresses the moral importance of increasing the supply of organs and argues that making donation easier need not conflict with altruism. It then goes on to explore one way that donation can be increased, namely by adopting an opt-out system, in which cadaveric organs are used unless the deceased (or their family) registered an objection. Such policies are often labelled ‘presumed consent’, but it is argued that critics are right to be sceptical of this idea—consent is shown to be an action, rather than a mental attitude, and thus not something that can be presumed. Either someone has consented or they have not, whatever their attitude to the use of their organs. Thankfully, an opt-out scheme need not rest on the presumption of consent. Actual consent can be given implicitly, by one's actions, so it is argued that the failure to register an objection (given certain background conditions) should itself be taken as a sign of consent. Therefore, it is permissible to use the organs of someone who did not opt out, because they have—by their silence—actually consented.
Adaptive preference formation is the unconscious altering of our preferences in light of the options we have available. Jon Elster has argued that this is bad because it undermines our autonomy. I agree, but think that Elster's explanation of why is lacking. So, I draw on a richer account of autonomy to give the following answer. Preferences formed through adaptation are characterized by covert influence (that is, explanations of which an agent herself is necessarily unaware), and covert influence undermines our autonomy because it undermines the extent to which an agent's preferences are ones that she has decided upon for herself. This answer fills the lacuna in Elster's argument. It also allows us to draw a principled distinction between adaptive preference formation and the closely related phenomenon of character planning.
Toulmin’s formulation of “analytic arguments” in his 1958 book, The Uses of Argument, is opaque. Commentators have not adequately explicated this formulation, though Toulmin called it a “key” and “crucial” concept for his model of argument macrostructure. Toulmin’s principal “tests” for determining analytic arguments are problematic. Neither the “tautology test” nor the “verification test” straightforwardly indicates whether an argument is analytic or not. As such, Toulmin’s notion of analytic arguments might not represent such a key feature of his model. Absent a clearer formulation of analytic arguments, readers of Toulmin should be hesitant to adopt this terminology.
Media studies as a field has traditionally been wary of the question of technology. Discussion of technology has often been restricted to relatively sterile debates about technological determinism. In recent times there has been renewed interest, however, in the technological dimension of media. In part this is doubtless due to rapid changes in media technology, such as the rise of the internet and the digital convergence of media technologies. But there are also an increasing number of writers who seem to believe that media theory, and more widely social science and the humanities, needs to rethink the question of technology and its relationship to society, culture and cultural production. Andrew Feenberg has characterized the two most dominant positions as, on the one hand, the social constructivist or 'technology studies' approach to technology, and, on the other, 'substantivist' theories of technology. The social constructivist approach aims to counter technological determinism by showing how the development of technology is shaped not by technical and scientific progress but by contingent social, cultural and economic forces. The limitation of this view, however, is that it tends to see technology as no different from any other social process and it may lose the ability to distinguish between the technical object and any other social formation or cultural artefact. Doubtless constructivism would like to see technology as a subset of the cultural artefact and not vice versa, therefore explaining technology in terms of culture and society. But there are powerful arguments for something like the opposite: in other words for understanding culture and society in terms of or as technical objects. In recent years this argument has been put most forcefully by the French philosopher Bernard Stiegler and it is to his work and its implications for media theory that this essay is dedicated.
In particular I will show how Stiegler might defend his ideas around technics from the charge of technological determinism. Firstly, technics is not understood in the narrow sense of techno-scientific technology but in the wider sense of all the ways in which the human is exteriorised into artefacts or organized inorganic matter. Technics in this sense is therefore inseparable from culture and society and it makes no sense either to talk of technics determining culture and society or vice versa. Culture and society are not constituted by technics as if by a cause but rather constituted through it. Secondly, technics in Stiegler's sense does not represent scientific progress or a deterministic evolution; rather, however strange this may seem, technics is a kind of pure accidentality or contingency. For Stiegler it is because of the exteriorisation of the human into artefacts or inorganic organized matter that culture and society constitute themselves contingently. Stiegler's work therefore offers a way for us to rethink the relationship between culture and technology in ways which can productively reconfigure the concerns of critical theory.
According to the principle Grice calls 'Modified Occam's Razor' (MOR), 'Senses are not to be multiplied beyond necessity'. More carefully, MOR says that if there are distinct ways in which an expression is regularly used, then, all other things being equal, we should favour the view that the expression is unambiguous and that certain uses of it can be explained in pragmatic terms. In this paper I argue that MOR cannot have the central role that is typically assigned to it by those who deploy it. More specifically, I argue that potential justifications of the epistemic import of parsimony in semantic theorizing are problematic, and that even if MOR could be justified, it has a redundant role to play in adjudicating the debate between the ambiguity-theorist and the proponent of the pragmatic approach.
The three most common responses to Taurek’s ‘numbers problem’ are saving the greater number, equal chance lotteries and weighted lotteries. Weighted lotteries have perhaps received the least support, having been criticized by Scanlon in What We Owe to Each Other (1998) and by Hirose in ‘Fairness in Life and Death Cases’ (2007). This article considers these objections in turn, and argues that they do not succeed in refuting the fairness of a weighted lottery, which remains a potential solution to cases of conflict. Moreover, it shows how these responses actually lead to a new argument for weighted lotteries, appealing to fairness and Pareto-optimality.
Kit Fine and Gideon Rosen propose to define constitutive essence in terms of ground-theoretic notions and some form of consequential essence. But we think that the Fine–Rosen proposal is a mistake. On the Fine–Rosen proposal, constitutive essence ends up including properties that, on the central notion of essence, are necessary but not essential. This is because consequential essence is closed under logical consequence, and the ability of logical consequence to add properties to an object’s consequential essence outstrips the ability of ground-theoretic notions, as used in the Fine–Rosen proposal, to take those properties out. The necessary-but-not-essential properties that, on the Fine–Rosen proposal, end up in constitutive essence include the sorts of necessary-but-not-essential properties that, others have noted, end up in consequential essence.
There is significant controversy over whether patients have a ‘right not to know’ information relevant to their health. Some arguments for limiting such a right appeal to potential burdens on others that a patient’s avoidable ignorance might generate. This paper develops this argument by extending it to cases where refusal of relevant information may generate greater demands on a publicly funded healthcare system. In such cases, patients may have an ‘obligation to know’. However, we cannot infer from the fact that a patient has an obligation to know that she does not also have a right not to know. The right not to know is held against medical professionals at a formal institutional level. We have reason to protect patients’ control over the information that they receive, even if in individual instances patients exercise this control in ways that violate obligations.
In recent work, Peter Hanks and Scott Soames argue that propositions are types whose tokens are acts, states, or events. Let’s call this view the type view. Hanks and Soames think that one of the virtues of the type view is that it allows them to explain why propositions have semantic properties. But, in this paper, we argue that their explanations aren’t satisfactory. In Section 2, we present the type view. In Section 3, we present one explanation—due to Hanks (2007, 2011) and Soames (2010)—of why propositions have semantic properties. We criticize this first explanation in Section 4. In Section 5, we present another explanation—due to Soames (2014)—of why propositions have semantic properties. We criticize this second explanation in Section 6.
Ben Blumson, ‘Mental Maps’, Philosophy and Phenomenological Research 85 (2) (2012): 413–434.
It's often hypothesized that the structure of mental representation is map-like rather than language-like. The possibility arises as a counterexample to the argument from the best explanation of productivity and systematicity to the language of thought hypothesis—the hypothesis that mental structure is compositional and recursive. In this paper, I argue that the analogy with maps does not undermine the argument, because maps and language have the same kind of compositional and recursive structure.