Background: It is often claimed that a regulated kidney market would significantly reduce the kidney shortage, thus saving or improving many lives. Data are lacking, however, on how many people would consider selling a kidney in such a market.
Methods: A survey instrument, developed to assess behavioural dispositions to and attitudes about a hypothetical regulated kidney market, was given to Swiss third-year medical students.
Results: Respondents’ (n = 178) median age was 23 years. Their socioeconomic status was high or middle (94.6%). 48 respondents (27%) considered selling a kidney in a regulated kidney market, of whom 31 (66%) would sell only to overcome a particularly difficult financial situation. High social status and male gender were the strongest predictors of a disposition to sell. 32 respondents (18%) supported legalising a regulated kidney market; this attitude was not associated with a disposition to sell a kidney. 5 respondents (2.8%) endorsed a market and would consider providing a kidney to a stranger only if paid; 4 of those 5 would sell only under financial duress.
Conclusions: Current understanding of a regulated kidney market is insufficient. It is unclear whether a regulated market would result in a net gain of kidneys. Most potential kidney vendors would sell only in a particularly difficult financial situation, raising concerns about the validity of consent and about inequities in the provision of organs. Further empirical and normative analysis of these issues is required; calls to implement and evaluate a regulated kidney market in pilot studies are therefore premature.
The late 20th century saw great movement in the philosophy of language, often critical of the fathers of the subject, Gottlob Frege and Bertrand Russell, but sometimes supportive of (or even defensive about) their work. Howard Wettstein's sympathies lie with the critics. But he says that they have often misconceived their critical project, treating it in ways that are technically focused and that miss the deeper implications of their revolutionary challenge. Wettstein argues that Wittgenstein, a figure with whom the critics of Frege and Russell are typically unsympathetic, laid the foundation for much of what is really revolutionary in this late 20th century movement. The subject itself should be of great interest, since philosophy of language has functioned as a kind of foundation for much of 20th century philosophy. But in fact it remains a subject for specialists, since the ideas are difficult and the mode of presentation is often fairly technical. In this book, Wettstein brings the non-specialist into the conversation (especially in the early chapters); he also reconceives the debate in a way that avoids technical formulation. The Magic Prism is intended for professional philosophers, graduate students, and upper-division undergraduates.
In this volume of essays, Howard Wettstein explores the foundations of religious commitment. His orientation is broadly naturalistic, but not in the mode of reductionism or eliminativism. The collection explores questions of broad religious interest, but does so through a focus on the author's religious tradition, Judaism. Among the issues explored are the nature and role of awe, ritual, doctrine, and religious experience; the distinction between belief and faith; problems of evil and suffering, with special attention to the Book of Job and to the Akedah, the biblical story of the binding of Isaac; and the virtue of forgiveness. One of the book's highlights is its literary approach to theology, an approach that at the same time makes room for philosophical exploration of religion. Another is Wettstein's rejection of the usual picture that sees religious life as sitting atop a distinctive metaphysical foundation, one that stands in need of epistemological justification.
This essay explains and criticizes Gentile's attempts to connect his metaphysical theories with his ideas about education, and especially the relationship between education and nationalism. It begins with a critical examination of the distinguishing features of the view Gentile specifies in Theory of Mind as Pure Act. Vincent then considers Gentile's account of how this theory, for which mind is an act of perpetual self-creation, leads to a conception of education with an explicitly nationalist bent. His attempts to connect these are ultimately unsuccessful, argues Vincent; actual idealism does not give rise to any specific political order, and certainly not the kind of state-led nationalism that Gentile ultimately supported.
In his controversial new book, Andrew Vincent offers a comprehensive, synoptic, and comparative analysis of the major conceptions of political theory throughout the twentieth century. The book challenges established views of contemporary political theory and provides critical perspectives on the future of the subject. It will be an indispensable resource for all scholars and students of the discipline.
This anthology of essays on the work of David Kaplan, a leading contemporary philosopher of language, sprang from a conference, "Themes from Kaplan," organized by the Center for the Study of Language and Information at Stanford University.
A distinction is developed between two uses of definite descriptions, the "attributive" and the "referential." The distinction exists even in the same sentence. Several criteria are given for making the distinction. It is suggested that both Russell's and Strawson's theories fail to deal with this distinction, although some of the things Russell says about genuine proper names can be said about the referential use of definite descriptions. It is argued that the presupposition or implication that something fits the description, present in both uses, has a different genesis depending upon whether the description is used referentially or attributively. This distinction in use seems not to depend upon any syntactic or semantic ambiguity. It is also suggested that there is a distinction between what is here called "referring" and what Russell defines as denoting. Definite descriptions may denote something, according to his definition, whether used attributively or referentially.
Garrath Williams claims that truly responsible people must possess a “capacity … to respond [appropriately] to normative demands” (2008:462). However, there are people whom we would normally praise for their responsibility despite the fact that they do not yet possess such a capacity (e.g. consistently well-behaved young children), and others who have such a capacity but who are still patently irresponsible (e.g. some badly-behaved adults). Thus, I argue that to qualify for the accolade “a responsible person” one need not possess such a capacity, but need only be earnestly willing to do the right thing and have a history that testifies to this willingness. Although we may have good reasons to prefer to have such a capacity ourselves, and to associate ourselves with others who have it, at a conceptual level I do not think that such considerations support the claim that having this capacity is a necessary condition of being a responsible person in the virtue sense.
Luck egalitarians think that considerations of responsibility can excuse departures from strict equality. However, critics argue that allowing responsibility to play this role has objectionably harsh consequences. Luck egalitarians usually respond either by explaining why that harshness is not excessive, or by identifying allegedly legitimate exclusions from the default responsibility-tracking rule to tone down that harshness. In response, critics respectively deny that this harshness is not excessive, or they argue that those exclusions would be ineffective or lacking in justification. Rather than taking sides, after criticizing both positions I argue that this way of carrying on the debate (i.e. as a debate about whether the harsh demands of responsibility outweigh other considerations, and about whether exclusions to responsibility-tracking would be effective and/or justified) is deeply problematic. On my account, the demands of responsibility do not, and in fact cannot, conflict with the demands of other normative considerations, because responsibility only provides a formal structure within which those other considerations determine how people may be treated; it does not generate its own practical demands.
Recent years have heralded increasing attention to the role of multinational corporations in regard to human rights violations. The concept of complicity has been of particular interest in this regard. This article explores the conceptual differences between silent complicity in particular and other, more "conventional" forms of complicity. Despite their far-reaching normative implications, these differences are often overlooked. Rather than being connected to specific actions, as is the case for other forms of complicity, the concept of silent complicity is tied to the identity, or moral stature, of the accomplice. More specifically, it helps us expose multinational corporations in positions of political authority. Political authority breeds political responsibility. Thus, corporate responsibility in regard to human rights may go beyond "doing no harm" and include a positive obligation to protect. Making sense of this duty leads to a discussion of the scope and limits of legitimate human rights advocacy by corporations.
Could neuroimaging evidence help us to assess the degree of a person’s responsibility for a crime that we know they committed? This essay defends an affirmative answer to this question. A range of standard objections to this high-tech approach to assessing people’s responsibility is considered and then set aside, but I also bring to light and then reject a novel objection, one which is only encountered when functional (rather than structural) neuroimaging is used to assess people’s responsibility.
Egalitarians must address two questions: (i) what should there be an equality of, which concerns the currency of the ‘equalisandum’; and (ii) how should this thing be allocated to achieve the so-called equal distribution? A plausible initial composite answer to these two questions is that resources should be allocated in accordance with choice, because this way the resulting distribution of the said equalisandum will ‘track responsibility’: responsibility will be tracked in the sense that we alone will be responsible for the resources that are available to us, since our allocation of resources will be a consequence of our own choices. But the effects of actual choices should not be preserved until the prior effects of luck in constitution and circumstance are first eliminated. For instance, people can choose badly because their choice-making capacity was compromised due to a lack of intelligence (i.e. due to constitutional bad luck), or because only bad options were open to them (i.e. due to circumstantial bad luck), and under such conditions we are not responsible for our choices. So perhaps a better composite answer to our two questions (from the perspective of tracking responsibility) might be that resources should be allocated so as to reflect people’s choices, but only once those choices have been corrected for the distorting effects of constitutional and circumstantial luck; on this account choice preservation and luck elimination are two complementary aims of the egalitarian ideal. Nevertheless, it is one thing to say that luck’s effects should be eliminated, but quite another to figure out just how much resource redistribution would be required to achieve this outcome, and it was precisely for this purpose that in 1981 Ronald Dworkin developed the ingenious hypothetical insurance market argumentative device (HIMAD), which he then used in conjunction with the talent slavery (TS) argument to arrive at an estimate of the amount of redistribution that would be required to reduce the extent of luck’s effects. Recently, however, Daniel Markovits has cast doubt over Dworkin’s estimates of the amount of redistribution that would be required, by pointing out flaws in his understanding of how the hypothetical insurance market would function. Markovits patched up Dworkin’s HIMAD and used this patched-up version together with his own version of the TS argument to reach his own conservative estimate of how much redistribution there ought to be in an egalitarian society. Notably though, on Markovits’ account, once the HIMAD is patched up and properly understood, the TS argument will also allegedly show that the two aims of egalitarianism are not necessarily complementary, but rather that they can actually compete with one another. According to his own ‘equal-agent’ egalitarian theory, the aim of choice preservation is more important than the aim of luck elimination, and so he alleges that when the latter aim comes into conflict with the former, the latter will need to be sacrificed to ensure that people are not subordinated to one another as agents. I believe that Markovits’ critique of Dworkin is spot on, but I also think that his own positive thesis, and hence his conclusion about how much redistribution there ought to be in an egalitarian society, is flawed.
Hence, this paper will begin in Section I by explaining how Dworkin uses the HIMAD and his TS argument to estimate the amount of redistribution that there ought to be in an egalitarian society — this section will be largely expository in content. Markovits’ critique of Dworkin will then be outlined in Section II, as will be his own positive thesis. My critique of Markovits, and my own positive thesis, will then make a fleeting appearance in Section III. Finally, I will conclude by rejecting both Dworkin’s and Markovits’ estimates of the amount of redistribution that there ought to be in an egalitarian society, and by reaffirming the responsibility-tracking egalitarian claim that choice preservation and luck elimination are complementary and not competing egalitarian aims.
The way in which we characterize the structural and functional differences between psychopathic and normal brains – either as biological disorders or as mere biological differences – can influence our judgments about psychopaths’ responsibility for criminal misconduct. However, Marga Reimer (Neuroethics 1(2):14, 2008) points out that whether our characterization of these differences should be allowed to affect our judgments in this manner “is a difficult and important question that really needs to be addressed before policies regarding responsibility... can be implemented with any confidence”. This paper is an attempt to address Reimer’s difficult and important question; I argue that irrespective of which of these two characterizations is chosen, our judgments about psychopaths’ responsibility should not be affected, because responsibility hinges not on whether a particular difference is (referred to as) a disorder or not, but on how that difference affects the mental capacities required for moral agency.
In "Torts, Egalitarianism and Distributive Justice" , Tsachi Keren-Paz presents impressingly detailed analysis that bolsters the case in favour of incremental tort law reform. However, although this book's greatest strength is the depth of analysis offered, at the same time supporters of radical law reform proposals may interpret the complexity of the solution that is offered as conclusive proof that tort law can only take adequate account of egalitarian aims at an unacceptably high cost.
Neoliberal globalization has not yielded the results it promised; global inequality has risen, and poverty and hunger still prevail in large parts of the world. If this devastating situation is to be improved, economists must talk less about economic growth and more about people’s rights. The use of the language of rights will be key to making the economy work more in favor of the least advantaged in this world. Not only will it provide us with the vocabulary necessary to reframe such pressing global problems and to find adequate economic solutions; it will also deliver the basis for deriving corresponding duties and duty-bearers – the language of rights is congruent with the language of justice and as such is inevitably, at the same time, the language of obligations. The language of obligations exposes the multinational corporation as one of the main agents of justice in the global economy. Taking distributive justice as a starting point for reflection, a consistent derivation of the multinational’s moral obligations must focus on capabilities rather than on causality. This will lead to a shift from merely passive to active duties and accordingly to a stronger emphasis on the corporation’s contribution to the realization of positive rights.
This thesis considers two allegations which conservatives often level at no-fault systems — namely, that responsibility is abnegated under no-fault systems, and that no-fault systems under- and over-compensate. I argue that each of these allegations can be satisfactorily met: the responsibility allegation rests on the mistaken assumption that to properly take responsibility for our actions we must accept liability for those losses for which we are causally responsible, and the compensation allegation rests on the mistaken assumption that tort law’s compensatory decisions provide a legitimate norm against which no-fault’s decisions can be compared and criticized. However, meeting them leads in a direction which is at odds with accident law reform advocates’ typical recommendations. On my account, accident law should not just be reformed in line with no-fault’s principles; rather, it should be completely abandoned, since the principles that protect no-fault systems from the conservatives’ two allegations are incompatible with retaining the category of accident law. They entail that no-fault systems are a form of social welfare rather than accident law systems, and that under these systems serious deprivation – and to a lesser extent causal responsibility – should be conditions of eligibility to claim benefits.
This is a report on the 3-day workshop “The Neuroscience of Responsibility” that was held in the Philosophy Department at Delft University of Technology in The Netherlands on February 11th–13th, 2010. The workshop had 25 participants from The Netherlands, Germany, Italy, the UK, the USA, Canada and Australia, with expertise in philosophy, neuroscience, psychology, psychiatry and law. Its aim was to identify current trends in neurolaw research related specifically to the topic of responsibility, and to foster international collaborative research on this topic. The workshop agenda was constructed by the participants at the start of each day by surveying the topics of greatest interest and relevance to participants. In what follows, we summarize (1) the questions which participants identified as most important for future research in this field, (2) the most prominent themes that emerged from the discussions, and (3) the two main international collaborative research project plans that came out of this meeting.
It could be argued that tort law is failing, and an example of this failure is arguably the recent public liability and insurance (‘PL&I’) crisis. A number of solutions have been proposed, but ultimately the chosen solution should address whatever we take to be the cause of this failure. On one account, the PL&I crisis is a result of an unwarranted expansion of the scope of tort law. Proponents of this position sometimes argue that the duty of care owed by defendants to plaintiffs has expanded beyond reasonable levels, such that parties who were not really responsible for another’s misfortune are successfully sued, while those who really were to blame get away without taking any responsibility. However, people should take responsibility for their actions, and the only likely consequence of allowing them to shirk it is that they and others will be less likely to exercise due care in the future, since the deterrents of liability and of no compensation for accidentally self-imposed losses will not be there. Others also argue that this expansion is not warranted because it is inappropriately fueled by ‘deep pocket’ considerations rather than by considerations of fault. They argue that the presence of liability insurance sways the judiciary to award damages against defendants, since judges know that insurers, and not the defendant personally, will pay for it in the end anyway. But although it may seem that no real person has to bear these burdens when they are imposed onto insurers, in reality all of society bears them collectively when insurers are forced to hike their premiums to cover these increasing damages payments. In any case, it seems unfair to force insurers to cover these costs simply because they can afford to do so. If such an expansion is indeed the cause of the PL&I crisis, then a contraction of the scope of tort liability, and a pious return to the fault principle, might remedy the situation. However, it could also be argued that inadequate deterrence is the cause of this crisis. On this account the problem would lie not with the tort system’s continued unwarranted expansion, but in the fact that defendants really have been too careless. If prospective injurers were appropriately deterred from engaging in unnecessarily risky activities, then fewer accidents would ever occur in the first place, and this would reduce the need for litigation at its very source. If we take this to be the cause of tort law’s failure, then our solution should aim to improve deterrence. Glen Robinson has argued that improved deterrence could be achieved if plaintiffs were allowed to sue defendants for wrongful exposure to ongoing risks of future harm, even in the absence of currently materialized losses. He argues that at least in toxic injury type cases the tortious creation of risk [should be seen as] an appropriate basis of liability, with damages being assessed according to the value of the risk, as an alternative to forcing risk victims to abide the outcome of the event and seek damages only if and when harm materializes. In a sense, Robinson wishes to treat newly-acquired wrongful risks as de facto wrongful losses, and these are what would be compensated in liability for risk creation (‘LFRC’) cases. Robinson argues that if the extent of damages were fixed to the extent of risk exposure, all detected unreasonable risk creators would be forced to bear the costs of their activities, rather than only those who could be found responsible for another’s injuries ‘on the balance of probabilities’.
The incidence of accidents should decrease as a result of improved deterrence, which would reduce the ‘suing fest’ and so resolve the PL&I crisis. So whilst the first solution involves contracting the scope of tort liability, Robinson’s solution involves an expansion of its scope. However, Robinson acknowledges that LFRC seems prima facie incompatible with current tort principles, which at the least require the presence of plaintiff losses, defendant fault, and causation to be established before making defendants liable for plaintiffs’ compensation. Since losses would be absent in LFRC cases by definition, the first evidentiary requirement would always be frustrated, and in its absence proof of defendant fault and causation would also seem scant. If such an expansion of tort liability were not supported by current tort principles, then it would be no better than proposals to switch accident law across to no-fault, since both solutions would require comprehensive legal reform. However, Robinson argues that the above three evidentiary requirements could be met in LFRC cases to the same extent that they are met in other currently accepted cases, and hence that his solution would be preferable to no-fault solutions as it would require only incremental, not comprehensive, legal reform. Although I believe that actual losses should be present before allowing plaintiffs to seek compensation, I will not present a positive argument for this conclusion. My aim in this paper is not to debate the relative merits of Robinson’s solution as compared to no-fault solutions, nor to determine which account of the cause of the PL&I crisis is closer to the truth, but rather to find out whether Robinson’s solution would indeed require less radical legal reform than, for example, proposed no-fault solutions. I will argue that Robinson fails to show that current tort principles would support his proposed solution, and hence that his solution is at best on an even footing with no-fault solutions, since both would require comprehensive legal reform.
Charles Griswold’s seminal work, Forgiveness, is the focus of the present essay. Following Griswold, I distinguish the relevant virtue of character from something that is more like an act or process. The paper discusses a number of hesitations I have about Griswold’s analysis, at the level both of detail and of underlying conception.
The nature of reference, or the relation of a word to the object to which it refers, has been perhaps the dominant concern of twentieth-century analytic philosophy. Extremely influential arguments by Gottlob Frege around the turn of the century convinced the large majority of philosophers that the meaning of a word must be distinguished from its referent, the former only providing some kind of direction for reaching the latter. In the last twenty years, this Fregean orthodoxy has been vigorously challenged by those who argue that certain important kinds of words, at least, refer directly without need of an intermediate meaning or sense. The essays in this volume record how a long-term study of Frege has persuaded the author that Frege's pivotal distinction between sense and reference, and his attendant philosophical views about language and thought, are unsatisfactory. Frege's perspective, he argues, imposes a distinctive way of thinking about semantics, specifically about the centrality of cognitive significance puzzles for semantics. Freed from Frege's perspective, we will no longer find it natural to think about semantics in this way.
Third-party property insurance (TPPI) protects insured drivers who accidentally damage an expensive car from the threat of financial ruin. Perhaps more importantly, though, TPPI also protects the victims whose losses might otherwise go uncompensated. Ought responsible drivers therefore to take out TPPI? This paper begins by enumerating some reasons why a rational person might believe that they have a moral obligation to take out TPPI. It will be argued that if what is at stake in taking responsibility is the ability to compensate our possible future victims for their losses, then it might initially seem that most people should be thankful for the availability of relatively inexpensive TPPI, because without it they may not have sufficient funds to do the right thing and compensate their victims in the event of an accident. But is the ability to compensate one's victims really what is at stake in taking responsibility? The second part of this paper critically examines the arguments for the above position, and it argues that these arguments do not support the conclusion that injurers should compensate their victims for their losses, and hence that drivers need not take out TPPI in order to be responsible. Further still, even if these arguments did support the conclusion that injurers should compensate their victims for their losses, then (perhaps surprisingly) nobody should be allowed to take out TPPI, because doing so would frustrate justice.
New concepts may prove necessary to profit from the avalanche of sequence data on the genome, transcriptome, proteome and interactome and to relate this information to cell physiology. Here, we focus on the concept of large activity-based structures, or hyperstructures, in which a variety of types of molecules are brought together to perform a function. We review the evidence for the existence of hyperstructures responsible for the initiation of DNA replication, the sequestration of newly replicated origins of replication, cell division, and metabolism. The processes responsible for hyperstructure formation include changes in enzyme affinities due to metabolite induction, lipid-protein affinities, elevated local concentrations of proteins and their binding sites on DNA and RNA, and transertion. Experimental techniques exist that can be used to study hyperstructures, and we review some of the ones less familiar to biologists. Finally, we speculate on how a variety of in silico approaches involving cellular automata and multi-agent systems could be combined to develop new concepts in the form of an Integrated cell (I-cell), which would undergo selection for growth and survival in a world of artificial microbiology.
It has long been urged against traditional theism, very long indeed, that God’s perfections—specifically in the domains of goodness, knowledge and power—are logically incompatible with the existence of unwarranted human suffering. It has almost equally long been urged that the problem is illusory—or at least surmountable; the tradition of theodicy must be only moments younger than the problem. The debate is a philosophical classic, with many ingenious moves on both sides, and epicycles galore. But whatever one’s view on the details of the debate, it is difficult—and I think unwise—to resist the sense that evil presents a real and indeed substantial problem for the Western religious tradition.
For living beings, information is as fundamental as matter or energy. In this paper we show: (a) the inadequacies of quantitative theories of information, and (b) how a qualitative analysis leads to a classification of information systems and to a modelling of intercellular communication. From a quantitative point of view, the application in biology of information theories borrowed from communication techniques has proved disappointing. These theories deliberately ignore the significance of messages and do not give any definition of information. They refer to quantities based upon arbitrarily defined probabilistic events. Probability is subjective: the receiver of the message needs to have meta-knowledge of the events. The quantity of information depends on language, coding, and an arbitrary definition of disorder. The suggested objectivity is fallacious.
Nationalism has had a complex relation with the discipline of political theory during the 20th century. Political theory has often been deeply uneasy with nationalism because of its role in the events leading up to and during the Second World War. Many theorists saw nationalism as an overly narrow and potentially irrationalist doctrine; in essence it embodied a closed vision of the world. This article focuses on one key contributor to the immediate post-war debate, Karl Popper, who retained deep misgivings about nationalism until the end of his life, and indeed saw the events of the early 1990s (shortly before his death) as a confirmation of this distrust. Popper was one of a number of immediate post-war writers, such as Friedrich Hayek and Ludwig von Mises, who shared this unease with nationalism. They all had a powerful effect on social and political thought in the English-speaking world. Popper in particular articulated a deeply influential perspective that fortuitously encapsulated a cold war mentality in the 1950s. In 2005 Popper's critical views are doubly interesting, since the last decade has seen a renaissance of nationalist interests. The collapse of the Berlin Wall in 1989 and the changing political landscape of international and domestic politics have once again produced a massive growth of interest in nationalism, particularly from liberal political theorists, and a growing, at times immensely enthusiastic, academic literature trying to provide a distinctively benign benediction to nationalism.
I argue that theological doctrine, the output of philosophical theology, is not a natural tool for thinking about biblical/rabbinic Judaism. Fundamental to my argument is the claim that there is a tension between constellations of theological doctrine of medieval vintage and the primary religious literature: the Hebrew Bible as understood through, and supplemented by, the Rabbis of the Talmud. This tension is a product of the genesis of philosophical theology, the application of Greek philosophical thought to a very different tradition, one that emerged from a very different world.
Contemporary semantical discussions make mention of the traditional approach to semantics represented by Frege and/or Russell--even sometimes by Frege-Russell. Is there a Frege-Russell view in the philosophy of language? How much of a common semantical perspective did Frege and Russell share? The matter bears exploration. I begin with Frege and Russell on propositions.