In the case of an intellectually disabled patient, the attending physician was restricted from writing a Do-Not-Resuscitate (DNR) order. Although the rationale for this restriction was to protect the patient from an inappropriate quality of life judgment, it resulted in a worse death than the patient would have experienced had he not been disabled. Such restrictions that are intended to protect intellectually disabled patients may violate their right to equal treatment and to a dignified death.
Tobacco companies have started to position themselves as good corporate citizens. Their efforts at CSR engagement are heavily criticized by anti-tobacco NGOs, and some opponents, such as the World Health Organization, have even categorically questioned the possibility of social responsibility in the tobacco industry. The paper demonstrates that the deep distrust of tobacco companies is linked to the lethal character of their products and the dubious behavior of their representatives in recent decades. As a result, tobacco companies are not in the CSR business in the strict sense. Key aspects of mainstream CSR theory and practice, such as corporate philanthropy, stakeholder collaboration, CSR reporting, and self-regulation, are shown to be ineffective or even counterproductive in the tobacco industry. Building upon the terminology of the leadership literature, the paper proposes to differentiate between transactional and transformational CSR, arguing that tobacco companies can only operate on a transactional level. As a consequence, corporate responsibility in the tobacco industry rests on a much thinner approach to CSR and has to be conceptualized with a focus on transactional integrity across the tobacco supply chain.
In their development of causal decision theory, Allan Gibbard and William Harper advocate a particular method for calculating the expected utility of an action, a method based upon the probabilities of certain counterfactuals. Gibbard and Harper then employ their method to support a two-box solution to Newcomb's paradox. This paper argues against some of Gibbard and Harper's key claims concerning the truth-values and probabilities of counterfactuals involved in expected utility calculations, thereby disputing their analysis of Newcomb's paradox. If we are right, then Gibbard and Harper's method of calculating expected utility does not adequately represent rational choice.
The idea that there is such a thing as Wittgensteinian foundationalism is a provocative one for two reasons. For one thing, Wittgenstein is widely regarded as an anti-foundationalist. For another, the very word 'foundationalism' sounds like the name of a theory, and Wittgenstein famously opposed the advancing of theories and theses in philosophy. Nonetheless, in his book Moore and Wittgenstein on Certainty, Avrum Stroll has argued that Wittgenstein does indeed develop a foundationalist view in his final work, On Certainty. On this basis, Stroll goes on to argue against a number of contemporary views, including forms of relativism and scientism. In what follows I will examine what Stroll calls Wittgenstein's foundationalism (in Section 1) and argue that Stroll's reading of Wittgenstein, though original and interesting, is misguided in important ways and so cannot be used against the views he opposes (in Section 2). Finally, in Section 3, I offer a brief summary of the reading of Wittgenstein that I recommend.
In this paper I contrast some widespread ideas about what Wittgenstein said about religious belief with statements Wittgenstein made about his purposes and method in doing philosophy, in order to argue that he did not hold the views commonly attributed to him. These allegedly Wittgensteinian doctrines in fact essentialize religion in a very un-Wittgensteinian way. A truly Wittgensteinian philosophy of religion can only be a personal process, and there can be no part in it for generalized hypotheses or conclusions about religion in general. Why is it that in this case I seem to be missing the entire point?
This paper looks at a dispute in decision theory about how best to characterize expected utility maximization and express the logic of rational choice. Where A1, … , An are actions open to some particular agent, and S1, … , Sn are mutually exclusive states of the world such that the agent knows at least one of them obtains, does the logic of rational choice require the agent to consider the conditional probability that state Si obtains given choice Ai, Prob(Si/Ai)? Or is the logic of choice better represented by considering the probability of the counterfactual if Ai then Si, Prob(Ai □→ Si)? Causal decision theorists such as Allan Gibbard, William Harper, and David Lewis defend the counterfactual analysis, whereas Richard Jeffrey and others defend the conditional probability analysis, evidential decision theory. I argue that the problems posed by cases of decision instability favor evidential decision theory.
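The disagreement can be made concrete with a toy calculation on Newcomb's problem; the predictor accuracy of 0.99 and the payoffs below are stock illustrative numbers, not figures from the paper:

```python
# Toy calculation contrasting evidential and causal expected utility in
# Newcomb's problem. The 0.99 predictor accuracy and the payoffs are
# stock illustrative numbers, not taken from the paper under discussion.

ACCURACY = 0.99                      # predictor's reliability
MILLION, THOUSAND = 1_000_000, 1_000

def eu_evidential(act):
    """EDT: weight outcomes by Prob(state | act)."""
    if act == "one-box":
        # Prob(opaque box holds $1M | one-box) = ACCURACY
        return ACCURACY * MILLION
    # Prob(opaque box holds $1M | two-box) = 1 - ACCURACY
    return (1 - ACCURACY) * (MILLION + THOUSAND) + ACCURACY * THOUSAND

def eu_causal(act, p_full=0.5):
    """CDT: the opaque box's content is causally independent of the choice,
    so weight by an unconditional probability p_full that it holds $1M."""
    base = p_full * MILLION
    return base + (THOUSAND if act == "two-box" else 0)

# EDT recommends one-boxing; CDT adds a sure $1,000 for two-boxing
# whatever p_full is, so it recommends taking both boxes.
print(eu_evidential("one-box"), eu_evidential("two-box"))
print(eu_causal("one-box"), eu_causal("two-box"))
```

With these numbers the evidential expectation of one-boxing ($990,000) dwarfs that of two-boxing (about $11,000), while the causal expectation of two-boxing exceeds one-boxing by exactly $1,000 whatever unconditional probability is assigned to the opaque box containing the million.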
In discussions surrounding epistemology and rationality, it is often useful to assume an agent is rational or ideally rational. Often, this ideal rationality assumption is spelled out along the following lines:

1. The agent believes everything about a situation which the evidence entitles her to believe and nothing which it does not.
2. The agent believes all the logical consequences of any of her beliefs.
3. The agent knows her own mind: if she believes P, she believes that she believes P; and if she doesn't believe P, she believes that she doesn't believe P.
4. The agent believes nothing of the form 'P and it is not the case that P.'
5. If an agent's background belief-set satisfies 1-4 and if rationality requires the agent to add P to her belief-set, then the resulting belief-set will also satisfy 1-4.

While individually plausible, there are cases in which holding on to 1-5 generates paradox. Some resolve such paradoxical cases by granting 1-5 but arguing that while ideally rational agents can exist, they can't possibly ever find themselves in such a situation: such case descriptions are epistemically incoherent. Others allow that rational agents can coherently find themselves in such odd circumstances, and argue that it's more reasonable to weaken our concept of ideal rationality and give up premise (2) above. However, this strategy has also been rejected. My aim in this paper is to defend the utility of positing an ideally rational agent in such paradoxical circumstances. I argue that in such cases we should give up (2), in particular the assumption that (necessarily) if an ideally rational agent believes both P and the conditional, if P then Q, then she believes Q. What's important is to hold on to the goal of positing ideal rationality: to maximize the amount of true or probably true information a thinker can justifiably believe in a given circumstance. Normally that will mean holding on to (2), but these unusual paradoxical cases are best handled by giving up (2).
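Condition (2), logical closure, can be given a toy operational reading; the string encoding of conditionals below is a hypothetical convenience, restricting closure to modus ponens over explicitly represented beliefs:

```python
# Hypothetical toy model (not the paper's formalism): belief sets as sets of
# formula strings, with condition (2) approximated as closure under modus
# ponens for conditionals written in the form "P -> Q".

def modus_ponens_closure(beliefs):
    """Smallest superset of `beliefs` closed under: from P and P -> Q, infer Q."""
    closed = set(beliefs)
    changed = True
    while changed:
        changed = False
        for b in list(closed):
            if " -> " in b:
                antecedent, consequent = b.split(" -> ", 1)
                if antecedent in closed and consequent not in closed:
                    closed.add(consequent)
                    changed = True
    return closed

def satisfies_closure(beliefs):
    """Condition (2), restricted to modus ponens: is the set already closed?"""
    return modus_ponens_closure(beliefs) == set(beliefs)

# An agent believing P and "P -> Q" violates (2) until she also believes Q.
B = {"P", "P -> Q"}
print(satisfies_closure(B))
print("Q" in modus_ponens_closure(B))
```

Giving up (2) in the paradoxical cases amounts to permitting a belief set like B above to count as ideally rational without Q being added.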
This paper proposes a view uniformly extending expected utility calculations to both individual and group choice contexts. Three related cases illustrate the problems inherent in applying expected utility to group choices. However, these problems do not essentially depend upon the fact that more than one agent is involved. I devise a modified strategy allowing the application of expected utility calculations to these otherwise problematic cases. One case, however, apparently leads to contradiction. But recognizing the falsity of proposition (1) below allows the resolution of the contradiction, and also allows my modified strategy to resolve otherwise paradoxical cases of group choice such as the Prisoners' Dilemma:

(1) If an agent x knows options A and B are both available, and x knows that were he to do A he would be better off (in every respect) than were he to do B, then doing A is more rational for x than doing B.
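A sketch of how such an expected utility calculation plays out in the Prisoners' Dilemma; the payoffs and the correlation parameter below are standard illustrative values, not the paper's own cases:

```python
# Toy expected utility calculation for a Prisoners' Dilemma in which each
# agent treats the other's choice as evidentially correlated with her own.
# Payoffs are the standard textbook values, not the paper's cases.

PAYOFF = {  # my payoff given (my act, other's act); C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def expected_utility(my_act, mirror_prob):
    """mirror_prob = Prob(the other agent makes the same choice I do)."""
    same = PAYOFF[(my_act, my_act)]
    diff = PAYOFF[(my_act, "D" if my_act == "C" else "C")]
    return mirror_prob * same + (1 - mirror_prob) * diff

# Defection is better in every individual state, yet with strong enough
# correlation cooperation maximizes expected utility.
for q in (0.5, 0.9):
    print(q, expected_utility("C", q), expected_utility("D", q))
```

Once the mirror probability exceeds 5/7, cooperating has the higher expected utility even though defecting is better in every individual state, which is exactly the kind of case that puts pressure on proposition (1).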
This article gives an overview of the ethical dispute over preimplantation genetic diagnosis (PGD), its legal status, and its practical usage in Europe. We provide a detailed description of the situation in Germany, where prenatal diagnosis is routinely applied but PGD is prohibited on the basis of the internationally unique Embryo Protection Act (EPA) that was put into force in 1991. Both PGD and stem cell research were vigorously debated in Germany during the last four years. As regards the PGD debate specifically, the voices of those directly affected were not adequately taken into consideration. We describe the predominant lines of argumentation in this debate and some essential results of our "bioethical field study" of opinions on and usage of PGD in Germany, and their implications for German legislation and ethical theory.
Since existentialism lost its influence in philosophy in the 1960s, postmodern theory has taken over criticizing basic concepts of western thought. From a postmodern point of view, the main shortcoming of existentialism is that it criticizes traditional unitarian concepts while re-inventing new unitarian models. Against these unitarian approaches, postmodernism holds that the world can only be described in terms of difference. In this article the postmodern program and its differences from existentialism are explained with reference to three concepts of western philosophy: subject, truth, and ethics. Applying these concepts, the relevance of postmodernism for medical theory is illustrated.
In the debate regarding the different possibilities for gene therapy, it is presupposed that the manipulations are limited to the nuclear genome (nDNA). Given recent advances in genetics, the mitochondrial genome (mtDNA) and its diseases must be considered as well. In this paper, we propose a three-dimensional framework for the ethical debate on gene therapy, adding the genomic type (nDNA vs. mtDNA) as a third dimension beside the paradigmatic dimensions of target cell (somatic vs. germ-line) and purpose (therapeutic vs. enhancement). Somatic gene therapy can be viewed today as generally accepted, and we review the contemporary arguments surrounding it on the basis of bioethical-pragmatic, socio-political, and deontological classifications. Many of the supposed ethical questions of somatic gene therapy today are not new; they are well-known issues of research ethics. We also critically summarize the different international perspectives and the German ethical discussion regarding manipulations of germ-line cells.
We look at the problem of revising fuzzy belief bases, i.e., belief base revision in which both formulas in the base as well as revision-input formulas can come attached with varying degrees. Working within a very general framework for fuzzy logic which is able to capture certain types of uncertainty calculi as well as truth-functional fuzzy logics, we show how the idea of rational change from “crisp” base revision, as embodied by the idea of partial meet (base) revision, can be faithfully extended to revising fuzzy belief bases. We present and axiomatise an operation of partial meet fuzzy base revision and illustrate how the operation works in several important special instances of the framework. We also axiomatise the related operation of partial meet fuzzy base contraction.
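A minimal sketch of the crisp (two-valued) special case of partial meet base revision, with brute-force propositional entailment over a toy language; this illustrates the classical construction the paper generalizes, not the fuzzy operation itself:

```python
# Minimal sketch of crisp partial meet base revision, the two-valued special
# case of the fuzzy construction. Formulas are Python predicates over truth
# assignments to a fixed set of atoms; brute-force entailment is only meant
# for toy examples.
from itertools import product, combinations

ATOMS = ("p", "q")

def valuations():
    for bits in product([False, True], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, bits))

def entails(base, goal):
    """base |= goal iff every valuation satisfying all of base satisfies goal."""
    return all(goal(v) for v in valuations() if all(f(v) for f in base))

def remainders(base, goal):
    """Maximal subsets of base that do NOT entail goal."""
    base = list(base)
    found = []
    for k in range(len(base), -1, -1):          # large subsets first => maximality
        for subset in combinations(base, k):
            if not entails(subset, goal) and \
               not any(set(subset) <= set(r) for r in found):
                found.append(subset)
    return found

def partial_meet_revision(base, new, select=lambda rs: rs):
    """Levi identity: contract by the negation of `new`, then add `new`.
    `select` is the selection function (default: full meet, keep all remainders)."""
    neg_new = lambda v: not new(v)
    rs = select(remainders(base, neg_new))
    kept = set(base).intersection(*map(set, rs)) if rs else set()
    return kept | {new}

# Base {p, p -> q}, revised by not-q.
P = lambda v: v["p"]
P_IMPLIES_Q = lambda v: (not v["p"]) or v["q"]
NOT_Q = lambda v: not v["q"]
revised = partial_meet_revision([P, P_IMPLIES_Q], NOT_Q)
print(len(revised))
```

Here full meet selection is used (the selection function keeps every remainder), so revising {p, p → q} by ¬q retains neither base element; a less cautious selection function would keep more.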
In this paper I consider Kenneth Schaffner's (1998) rendition of 'developmentalism' from the point of view of bacteriophage biology. I argue that the fact that a viable phage can be produced from purified DNA and host cellular components lends some support to the anti-developmentalist, if they first show that one can draw a principled distinction between genetic and environmental effects. The existence of host-controlled phage host range restriction supports the developmentalist's insistence on the parity of DNA and environment. However, in the case of bacteriophage, the developmentalist stands on less firm ground than when organisms with nervous systems, such as Schaffner's C. elegans, are considered.
Over the last few decades there has been a strong narrative turn within the humanities and social sciences in general and educational studies in particular. Especially Jerome Bruner’s theory of narrative as a specific ‘mode of knowing’ was very important for this growing body of work. To understand how the narrative mode works, Bruner proposes to study narratives ‘at their far reach’, as an art form, and on several occasions he refers to the dramatistic pentad as an important method for ‘unpacking’ narratives. The pentad proposed by Bruner to study narratives was developed by the American philosopher and rhetorician Kenneth Burke and is embedded in his general linguistic theory of dramatism. From an educational perspective, Bruner’s reference to the work of Burke has not been elaborated upon thus far. In this paper we take up Bruner’s suggestion and explore how his educational theory of narrative as a mode of knowing can indeed be enriched by Kenneth Burke’s theory and method of dramatism. We claim that specifically the rhetorical framework developed by dramatism offers an important perspective on perspectives for education in a context that is increasingly confronted with a plurality of interpretive frameworks.
This article sketches a rapprochement between a current of political thought, neorealism, and a method in the human sciences, structuralism. This current and this method followed separate trajectories, from the post-war period to the end of the 1970s, until Kenneth Waltz brought the two problematics together. After defining realism and structuralism respectively, this article establishes their connection and attempts to clarify the reasons why this rapprochement had not been carried out before.
In The Bounds of Cognition, Fred Adams and Kenneth Aizawa subject the arguments for extended cognition to withering criticism. I summarize their main arguments and focus special attention on their distinction between the extended cognitive system hypothesis and the extended cognition hypothesis, as well as on their demand for a mark of the mental.
In his discussion of results which I (with Michael Hayward) recently reported in this journal, Kenneth Aizawa takes issue with two of our conclusions: (a) that our connectionist model provides a basis for explaining systematicity within the realm of sentence comprehension, subject to a limited range of syntax; (b) that the model does not employ structure-sensitive processing, and that this is clearly true in the early stages of the network's training. Ultimately, Aizawa rejects both (a) and (b) for reasons which I think are ill-founded. In what follows, I offer a defense of our position. In particular, I argue (1) that Aizawa adopts a standard of explanation that many accepted scientific explanations could not meet, and (2) that Aizawa misconstrues the relevant meaning of structure-sensitive process.
In a career of over seventy years, Kenneth Burke has produced a body of challenging and fascinating theoretical work. This work has had a bigger reputation than it has had a readership. Burke has been hailed not only as a strong precursor of the work of Fredric Jameson, Frank Lentricchia, and others, but also as a powerful original thinker whose writings have yet to be grappled with. Kenneth Burke: Rhetoric and Ideology is a lucid and accessible introduction to a major twentieth-century thinker whose ideas have influenced fields as diverse as literary criticism, philosophy, linguistics, politics, anthropology, and sociology. Stephen Bygrave explores the content of Burke's vast output, theorizing the cultural and philosophical implications of his work. Bygrave's rigorous arguments focus on Burke's preoccupation with the relationship between language, ideology, and action. This book traces Burke's "rhetorical strategies" and argues that they form a bridge between "action" and "symbolic action." By considering Burke as a reader and writer of narratives and systems, Bygrave examines the inadequacies of earlier readings of Burke and enfolds his thought within current debates in Anglo-American cultural theory. By reinstating Burke into contemporary cultural theory, this book offers a way of reading his ideas, as well as introducing students of literature and cultural studies to the range of ideas found in his work.
Our programmatic article on Homo heuristicus (Gigerenzer & Brighton, 2009) included a methodological section specifying three minimum criteria for testing heuristics: competitive tests, individual-level tests, and tests of adaptive selection of heuristics. Using Richter and Späth’s (2006) study on the recognition heuristic, we illustrated how violations of these criteria can lead to unsupported conclusions. In their comment, Hilbig and Richter conduct a reanalysis, but again without competitive testing. They neither test nor specify the compensatory model of inference they argue for. Instead, they test whether participants use the recognition heuristic in an unrealistic 100% (or 96%) of cases, report that only some people exhibit this level of consistency, and conclude that most people would follow a compensatory strategy. We know of no model of judgment that predicts 96% correctly. The curious methodological practice of adopting an unrealistic measure of success to argue against a competing model, and to interpret such a finding as a triumph for a preferred but unspecified model, can only hinder progress. Marewski, Gaissmaier, Schooler, Goldstein, and Gigerenzer (2010), in contrast, specified five compensatory models, compared them with the recognition heuristic, and found that the recognition heuristic predicted inferences most accurately.
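For readers unfamiliar with the methodological point, here is what a competitive test looks like on synthetic data; the cities, cue values, and recognition pattern below are invented for illustration:

```python
# Toy competitive test on synthetic data (not the studies discussed): the
# recognition heuristic and a simple compensatory (tallying) model each
# predict which of two objects scores higher, and both models are scored
# on the same comparison pairs.
from itertools import combinations

# object: (criterion score, recognized?, extra cue value) -- all invented
CITIES = {
    "A": (9, True,  0),
    "B": (7, True,  1),
    "C": (4, False, 1),
    "D": (2, False, 0),
}

def recognition_heuristic(x, y):
    """If exactly one object is recognized, pick it; otherwise guess
    (deterministically, for simplicity)."""
    rx, ry = CITIES[x][1], CITIES[y][1]
    if rx != ry:
        return x if rx else y
    return x

def compensatory(x, y):
    """Tally recognition and the extra cue with equal weights."""
    score = lambda c: int(CITIES[c][1]) + CITIES[c][2]
    return x if score(x) >= score(y) else y

def accuracy(model):
    pairs = list(combinations(CITIES, 2))
    correct = sum(
        1 for x, y in pairs
        if CITIES[model(x, y)][0] == max(CITIES[x][0], CITIES[y][0])
    )
    return correct / len(pairs)

print(accuracy(recognition_heuristic), accuracy(compensatory))
```

Both models are scored on the same pairs; on this invented data the recognition heuristic predicts every comparison correctly while the tallying model misses one, illustrating why models must be compared on predictive accuracy rather than on whether one of them fits 100% of cases.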