Interview with Anders Sandberg, member of the Future of Humanity Institute at Oxford University and expert in human enhancement and transhumanism, about central topics in his research.

KEYWORDS: TRANSHUMANISM, HUMAN ENHANCEMENT, ANDERS SANDBERG, BIOTECHNOLOGY.
In _Plato on Democracy and Political technē_ Anders Dahl Sørensen offers an in-depth investigation of Plato’s discussions of democracy’s ‘epistemic potential’, arguing that this question is far more central to his political thought than is usually assumed.
For more than the past hundred years, humanistic theory has been marked by extensive attention to practice and practices. Two prominent streams of thought sharing this focus are pragmatism and theories of practice. This volume brings together internationally prominent theorists to explore key dimensions of practice and practices against the background of parallels and points of contact between these two traditions. The contributors are all steeped in one or both of these streams and well known for their work on practice. The collected essays explore three important themes: what practice and practices are, normativity, and transformation. The volume deepens understanding of these three practice themes while strengthening appreciation of the parallels between, and complementarity of, pragmatism and practice theory.
This book discusses Gadamer's theory of context-dependence. Analytical and partly critical, the book also shows exegetical accuracy in its rendering of Gadamer's position. It explores the following questions that Gadamer's theory of context-dependence tries to answer: in what way is thought influenced by, and thus dependent on, its historical context? To what extent and in what way is the individual able to become reflectively aware of and emancipate himself from this dependence? The book takes Gadamer's wide interests into account, e.g. issues relating to the history of historiography and the nature of art and aesthetic experience. The problem of the context-dependence of thought is prominent in contemporary philosophy, including the fields of structuralism, post-structuralism, deconstruction, certain forms of feminist philosophy, and the philosophy of science. In this sense, the book discusses an issue with wide repercussions.
The orthodox view of proper names, Millianism, provides a very simple and elegant explanation of the semantic contribution of referential uses of names, i.e. names that occur as bare singulars and as the argument of a predicate. However, one problem for Millianism is that it cannot explain the semantic contribution of predicative uses of names. In recent years, an alternative view, so-called the-predicativism, has become increasingly popular. According to the-predicativists, names are uniformly count nouns. This straightforwardly explains why names can be used predicatively, but it is prima facie less congenial to an analysis of referential uses. To address this issue, the-predicativists argue that referential names are in fact complex determiner phrases consisting of a covert definite determiner and a count noun, and so a referential name is a definite description. In this paper, I will argue that despite the appearance of increased theoretical complexity, the view that names are ambiguous between predicative and referential types is in fact superior to the unitary the-predicativist view. However, I will also argue that to see why this ambiguity view is better, we need to give up the standard Millian analysis. Consequently, I will first propose an alternative analysis of referential names that retains the virtues of Millianism but provides an important explanatory connection to the predicative uses. Once this analysis of names is adopted, the explanation for why names are systematically ambiguous between referential and predicative types is both simple and elegant. Second, I will argue that the-predicativism has the appearance of being simpler than an ambiguity view but is in fact unable to account for certain key properties of referential names without making ad hoc stipulations.
This collection of essays provides an exemplary overview of the diversity and relevance of current scholarship on German Idealism. The importance of German Idealism for contemporary philosophy has received growing attention and acknowledgment across competing fields of the discipline. Part of the growing interest rests on the claim that the works of Kant, Fichte, Schelling, and Hegel remain of considerable interest for cultural studies, sociology, theology, aesthetics, and other areas. In the domain of philosophy, the renaissance of innovative readings of German Idealism has taken scholarly debates beyond merely antiquarian perspectives. This renaissance has been a major factor in current efforts to bridge the gap between so-called “analytic” and so-called “continental” philosophy. The volume provides a selection of well-chosen examples of readings that contribute to systematic treatments of philosophical problems. It contains contributions by Markus Gabriel, Robert Pippin, Anders Moe Rasmussen, and Sebastian Rödl.
Martha Nussbaum’s capabilities approach is today one of the most influential theories of justice. In her earlier works on the capabilities approach, Nussbaum only applies it to humans, but in later works she extends the capabilities approach to include sentient animals. Contrary to Nussbaum’s own view, some scholars, for example, David Schlosberg, Teea Kortetmäki and Daniel L. Crescenzo, want to extend the capabilities approach even further to include collective entities, such as species and ecosystems. Though I think we have strong reasons for preserving ecosystems and species within the capabilities approach, there are several problems with ascribing capabilities to them, especially if we connect it with the view that species and ecosystems are subjects of justice. These problems are partly a consequence of the fact that an ascription of capabilities to species and ecosystems needs to be based on an overlapping consensus between different comprehensive doctrines, in accordance with the framework of political liberalism on which the capabilities approach builds. First, the ascription of capabilities to species and ecosystems presupposes the controversial standpoint that they are objectively existing entities. Second, the ascription of capabilities to ecosystems and species and the view that they are subjects of justice is justified by claiming that they have integrity and agency, but these characteristics have different meanings when applied to collective entities and humans, respectively. Third, the view that species and ecosystems are subjects of justice seems to require the controversial assumption that they have interests of their own, which differ from the interests of the sentient beings that are part of them. However, even if we neither ascribe capabilities to species and ecosystems nor regard them as subjects of justice, there are still strong reasons to protect them within the capabilities approach, as the preservation of ecosystems and species is an important precondition for many human and animal capabilities.
Inference has long been a concern in epistemology, as an essential means by which we extend our knowledge and test our beliefs. Inference is also a key notion in influential psychological or philosophical accounts of mental capacities, from perception via utterance comprehension to problem-solving. Consciousness, on the other hand, has arguably been the defining interest of philosophy of mind over recent decades. Comparatively little attention, however, has been devoted to the significance of consciousness for the proper understanding of the nature and role of inference. It is commonly suggested that inference may be either conscious or unconscious. Yet how unified are these various supposed instances of inference? Does either enjoy explanatory priority in relation to the other? In what ways or senses can an inference be conscious, or fail to be conscious, and how does this matter? This book brings together original essays from established scholars and emerging theorists that illustrate how several current debates in epistemology, philosophy of psychology, and philosophy of mind can benefit from reflections on these and related questions about the significance of consciousness for inference. Contributors include: Kirk Ludwig and Wade Munroe; Michael Rescorla; Federico Bongiorno and Lisa Bortolotti; Berit Brogaard; Nicholas Allott; Jake Quilty-Dunn and Eric Mandelbaum; Corine Besson; Anders Nes; David Henderson, Terry Horgan, and Matjaž Potrč; Elijah Chudnoff; and Ram Neta.
In this paper, I argue that sufficientarian principles are indispensable in the set of principles that have bearing on issues in distributive ethics. I provide two arguments in favor of this claim. First, I argue that sufficientarianism is the only framework that allows us to appropriately analyze what sort of obligations we have toward individuals who are badly off due to their own faults and choices. Second, I argue that sufficientarianism is the only theory that provides an adequate framework for thinking about the desperately badly off.
What picture do we get when we apply gender analysis to mainstream moral philosophy? Starting with an analysis of Kant, Hume, and Rawls, Pauer-Studer develops the categorical framework of a moral theory that is free from any "male bias".
The principle of need—the idea that resources should be allocated according to need—is often invoked in priority setting in the health care sector. In this article, I argue that a reasonable principle of need must be indeterminate, and examine three different ways that this can be dealt with: appendicizing the principle with further principles, imposing determinacy, or empowering decision makers. I argue that need must be conceptualized as a composite property composed of at least two factors: health shortfall and capacity to benefit. When one examines how the different factors relate to each other, one discovers that this is sometimes indeterminate. I illustrate this indeterminacy in this article by applying the small improvement argument. If the relation between the factors is always determinate, the comparative relation changes by a small adjustment. Yet, if two needs are dissimilar but of seemingly equal magnitude, the comparative relation does not change by a small adjustment of one of the factors. I then outline arguments in favor of each of the three strategies for dealing with indeterminacy, but also point out that all strategies have significant shortcomings. More research is needed concerning how to deal with this indeterminacy, and the most promising path seems to be to scrutinize the position of the principle of need among a plurality of relevant principles for priority setting in the health care sector.
The paper addresses the phenomenology of inference. It proposes that the conscious character of conscious inferences is partly constituted by a sense of meaning; specifically, a sense of what Grice called ‘natural meaning’. In consciously drawing the (outright, categorical) conclusion that Q from a presumed fact that P, one senses the presumed fact that P as meaning that Q, where ‘meaning that’ expresses natural meaning. This sense of natural meaning is phenomenologically analogous, I suggest, to our sense of what is said in fluently comprehending everyday utterances in our first language. The proposal that conscious inference involves a sense of natural meaning is compared with views according to which conscious inference involves taking the premises (i) to be good reasons for the conclusion (as defended by Thomson and Grice), (ii) to support it (as argued by Audi and, recently, Boghossian), or (iii) to imply it (as lately contended by Broome). I argue our proposal can explain certain phenomena handled by alternatives (i) and (ii), but that some further phenomena are handled by our account and not by these alternatives. In relation to alternative (iii), I argue that, in so far as implicational and natural-meaning relations come apart, the latter are a better fit for what we sense or take to be so in conscious inference.
‘Language and End Time’ is a translation of Sections I, IV and V of ‘Sprache und Endzeit’, a substantial essay by Günther Anders that was published in eight instalments in the Austrian journal FORVM from 1989 to 1991. The original essay was planned for inclusion in the third volume of The Obsolescence of Human Beings. ‘Language and End Time’ builds on the diagnosis of ‘our blindness toward the apocalypse’ that was advanced in the first volume of The Obsolescence in 1956. The essay asks if there is a language that is capable of making us fully comprehend the looming ‘man-made apocalypse’. In response to this, it offers a critique of philosophical jargon and of the putatively ‘objective’ language of science, which are both dismissed as unsuitable. Sections I, IV and V introduce this core problematic. The selection of this text for inclusion in this special journal issue responds to present-day realities that inscribe Anders’s reflections on nuclear science and the nuclear situation into new contexts. The critique that ‘Language and End Time’ advances resonates with the way in which the decisions of a few companies and individuals are shaping the future of life on earth. At the same time, the wider stakes of Anders’s turn against the language employed by scientists are newly laid bare by the realities and politics of climate change and fake news. In this new context, the language of science is all too readily dismissed as if it were a mere idiom that can be ignored without consequence. It is against the backdrop of a future that is, if anything, more uncertain than at the time of Anders’s writing, that the essay’s reflections on popularisation, the limits of language and the nature of truth gain added significance.
The-Predicativism is the view that names are count nouns. For example, the meaning of the name ‘Louise’ is roughly the property of being called Louise. Moreover, proponents of this view maintain that names that are ostensibly in argument position of a predicate are covert definite descriptions. In recent years, The-Predicativism has acquired a number of new supporters, mainly Elbourne, Matushansky, and Fara. And while it was pointed out by Kripke that these kinds of views generally struggle with capturing the rigidity of proper names, these new views are alleged to solve this problem. In this paper I argue that the more recent versions of the view continue to struggle. In particular, I show that the views fail to provide an explanatory and/or empirically adequate analysis of rigidity. My discussions of these views are then supplemented with a general diagnosis of the problem and an explanation of why it is unlikely to be solved by The-Predicativism.
A wide range of gene knockout experiments shows that functional stability is an important feature of biological systems. On this backdrop, we present an argument for higher‐level causation based on counterfactual dependence. Furthermore, we sketch a metaphysical picture providing resources to explain the metaphysical nature of functional stability, higher‐level causation, and the relevant notion of levels. Our account aims to clarify the role empirical results and philosophical assumptions should play in debates about reductionism and higher‐level causation. It thereby contributes to the development of a philosophical foundation for systems biology.
Todd (2016) proposes an analysis of future-directed sentences, in particular sentences of the form 'will(φ)', that is based on the classic Russellian analysis of definite descriptions. Todd's analysis is supposed to vindicate the claim that the future is metaphysically open while retaining a simple Ockhamist semantics of future contingents and the principles of classical logic, i.e. bivalence and the law of excluded middle. Consequently, an open futurist can straightforwardly retain classical logic without appeal to supervaluations, determinacy operators, or any further controversial semantical or metaphysical complication. In this paper, we will show that this quasi-Russellian analysis of 'will' both lacks linguistic motivation and faces a variety of significant problems. In particular, we show that the standard arguments for Russell's treatment of definite descriptions fail to apply to statements of the form 'will(φ)'.
This paper provides an investigation of Ignorance Inferences by looking at the superlative modifier at least. The formal properties of these inferences are characterized in terms of the epistemic conditions that they impose on the speaker, thereby establishing how much can and must be inferred about what the speaker is ignorant about. The paper makes two main contributions. First, it argues that the form of these inferences depends solely on the structural properties of the expression that at least is modifying, which do not necessarily coincide with semantic entailment. Rather, rank and order seem to matter: with totally ordered associates, at least triggers Ignorance Inferences that may be formally different than those obtained with partially ordered associates. Second, it builds on neo-Gricean double alternative generation mechanisms, arguing that one of them must be provided by focus.
Anders Pettersson presents a comprehensive account of the foundations of literature, grounded in an original analysis of the interactions between author and reader. Drawing on post-Gricean pragmatics and Nicholas Wolterstorff's notion of presentationality, Pettersson develops the idea of the verbal text and conveys an integrated and nuanced understanding of literary experience, its conditions, and the values it affords. In the second part of Verbal Art he systematically examines the cognitive, affective, and formal aspects of the literary work and explores their interrelations.
Many philosophers hold that the phenomenology of thinking (also known as cognitive phenomenology) reduces to the phenomenology of the speech, sensory imagery, emotions or feelings associated with it. But even if this reductionist claim is correct, there is still a properly cognitive dimension to the phenomenology of at least some thinking. Specifically, conceptual content makes a constitutive contribution to the phenomenology of at least some thought episodes, in that it constitutes what I call their thematic unity. Often, when a thought episode has a phenomenal character, the various associated speech, sensory imagery, emotions or feelings are organized around a common theme, constituted by the conceptual content of one's thinking.
According to perceptualism, fluent comprehension of speech is a perceptual achievement, in as much as it is akin to such high-level perceptual states as the perception of objects as cups or trees, or of people as happy or sad. According to liberalism, grasp of meaning is partially constitutive of the phenomenology of fluent comprehension. I here defend an influential line of argument for liberal perceptualism, resting on phenomenal contrasts in our comprehension of speech, due to Susanna Siegel and Tim Bayne, against objections from Casey O'Callaghan and Indrek Reiland. I concentrate on the contrast between the putative immediacy of meaning-assignment in fluent comprehension, as compared with other, less ordinary, perhaps translation-based ways of getting at the meaning of speech. I argue this putative immediacy is difficult to capture on a non-perceptual view (whether liberal or non-liberal), and that the immediacy in question has much in common with that which applies in other, less controversial cases of high-level perception.
It is widely agreed that sentences containing a non-denoting description embedded in the scope of a propositional attitude verb have true de dicto interpretations, and Russell's (1905) analysis of definite descriptions is often praised for its simple analysis of such cases, cf. e.g. Neale (1990). However, several people, including Elbourne (2005, 2009), Heim (1991), and Kripke (2005), have contested this by arguing that Russell's analysis yields incorrect predictions in non-doxastic attitude contexts. Heim and Elbourne have subsequently argued that once certain facts about presupposition projection are fully appreciated, the Frege/Strawson analysis of definite descriptions has an explanatory advantage. In this paper, I argue that both Russell's analysis and the Frege/Strawson analysis face a serious problem when it comes to the interaction of attitude verbs and definite descriptions. I argue that the problem observed by Elbourne, Heim, and Kripke is much more general than standardly assumed and that a solution requires a revision of the semantics of definite and indefinite descriptions. I outline the conditions that are required to solve the problem and present an analysis couched in dynamic semantics which can provide a solution. I conclude by discussing some further issues related to propositional attitude verbs that complicate a fully general solution to the problem.
That wonder is educationally important will strike many people as obvious. And in a way it is obvious, because being capable of experiencing wonder implies an openness to experience and seems naturally allied to intrinsic educational motivation, an eagerness to inquire, a desire to understand, and also to a willingness to suspend judgement and bracket existing—potentially limiting—ways of thinking, seeing, and categorising. Yet wonder is not a single thing, and it is important to distinguish at least two kinds of wonder: active wonder, which entails a drive to explore, to find out, to explain; and deep or contemplative wonder, which is not inherently inquisitive like active wonder and, as a response to mystery, may leave us lost for words. Claims for wonder's importance to education and science often do not distinguish between the two, but whereas for active wonder that importance seems obvious, this is much less so for deep wonder, which by its very nature rather seems to be anti-educational. Yet in this paper I explore exactly the educational importance of deep wonder. This importance is found to lie, not just in its motivational effects—real though they are—but in making us attend to the world for its own sake, and making us aware of the limits of our understanding.
This article offers a discussion of the connection between technology and values and, specifically, I take a closer look at ethically sound design. In order to bring the discussion into a concrete context, the theory of Value Sensitive Design (VSD) will be the focus point. To illustrate my argument concerning design ethics, the discussion involves a case study of an augmented window, designed by the VSD Research Lab, which has turned out to be a potentially surveillance-enabling technology. I call attention to a “positivist problem” that has to do with the connection between the design context and the use context, which VSD seems to presuppose, and I argue that it is necessary to clearly distinguish between the two, since the designers’ intentions do not always correspond with the users’ practice; in fact, the relation between design and use is very complex and principally unpredictable. Thus, a design theory must accept that foresight is limited to anticipation rather than prediction. To overcome the positivist problem, I suggest a phenomenological approach to technology inspired by Don Ihde’s concept of multistability. This argument, which is general in nature and thus applies to any theory of design ethics, is intended as a constructive criticism, which can hopefully contribute to the further development of design ethics.
Human beings are a marvel of evolved complexity. Such systems can be difficult to enhance. When we manipulate complex evolved systems, which are poorly understood, our interventions often fail or backfire. It can appear as if there is a “wisdom of nature” which we ignore at our peril. Sometimes the belief in nature’s wisdom—and corresponding doubts about the prudence of tampering with nature, especially human nature—manifests as diffusely moral objections against enhancement. Such objections may be expressed as intuitions about the superiority of the natural or the troublesomeness of hubris, or as an evaluative bias in favor of the status quo. This chapter explores the extent to which such prudence-derived anti-enhancement sentiments are justified. We develop a heuristic, inspired by the field of evolutionary medicine, for identifying promising human enhancement interventions. The heuristic incorporates the grains of truth contained in “nature knows best” attitudes while providing criteria for the special cases where we have reason to believe that it is feasible for us to improve on nature.
This paper argues that decision problems and money-pump arguments should not be a deciding factor against accepting non-transitive better-than relations. If the reasons to accept normative standpoints that entail a non-transitive better-than relation are compelling enough, we ought to revise our decision method rather than the normative standpoints. The paper introduces the most common argument in favor of non-transitive better-than relations. It then illustrates that there are different ways to reconceptualize rational choice so that rational choice is possible even when the relevant better-than relation is non-transitive.
Artificial Intelligence as a buzzword and a technological development is presently cast as the ultimate ‘game changer’ for economy and society; a technology of which we cannot be the master, but which nonetheless will have a pervasive influence on human life. The fast pace with which the multi-billion dollar AI industry advances toward the creation of human-level intelligence is accompanied by an increasingly exaggerated chorus of the ‘incredible miracle’, or the ‘incredible horror’, that intelligent machines will constitute for humanity, as the human is gradually replaced by a technologically superior proxy, destined to be configured as a functional component at best, a relic at worst. More than half a century ago, Günther Anders sketched out this path toward technological obsolescence, and his work on ‘Promethean shame’ and ‘Promethean discrepancy’ provides an invaluable means with which to recognise and understand the relationship of the modern human to his/her technological products. In this article, I draw on Anders’s writings to unpack and unsettle contemporary narratives of our relation to AI, with a view toward refocusing attention on the responsibilities we bear in producing such immersive technologies. With Anders, I suggest that we must exercise and develop moral imagination so that the human capacity for moral responsibility does not atrophy in our technologically mediated future.
Ethical dilemmas are common in the neonatal intensive care setting. The aim of the present study was to investigate the opinions of Swedish physicians and the general public on treatment decisions regarding a newborn with severe brain damage. We used a vignette-based questionnaire which was sent to a random sample of physicians (n = 628) and the general population (n = 585). Respondents were asked to provide answers as to whether it is acceptable to discontinue ventilator treatment and, when it actually is discontinued, whether or not it is acceptable to use drugs which hasten death unintentionally or intentionally. The response rate was 67 % of physicians and 46 % of the general population. A majority of both physicians [56 % (CI 50–62)] and the general population [53 % (CI 49–58)] supported arguments for withdrawing ventilator treatment. A large majority in both groups supported arguments for alleviating the patient’s symptoms even if the treatment hastened death, but the two groups display significantly different views on whether or not to provide drugs with the additional intention of hastening death, although the difference disappeared when we compared subgroups of those who were for or against euthanasia-like actions. The study indicated that physicians and the general population have similar opinions regarding discontinuing life-sustaining treatment and providing effective drugs which might unintentionally hasten death but seem to have different views on intentions. The results might be helpful to physicians wanting to examine their own intentions when providing adequate treatment at the end of life.
Since the famous debate between Russell (Mind 14: 479–493, 1905, Mind 66: 385–389, 1957) and Strawson (Mind 59: 320–344, 1950; Introduction to logical theory, 1952; Theoria, 30: 96–118, 1964) linguistic intuitions about truth values have been considered notoriously unreliable as a guide to the semantics of definite descriptions. As a result, most existing semantic analyses of definites leave a large number of intuitions unexplained. In this paper, I explore the nature of the relationship between truth value intuitions and non-referring definites. Inspired by comments in Strawson (Introduction to logical theory, 1964), I argue that given certain systematic considerations, one can provide a structured explanation of conflicting intuitions. I show that the intuitions of falsity, which proponents of a Russellian analysis often appeal to, result from evaluating sentences in relation to specific questions in context. This is shown by developing a method for predicting when sentences containing non-referring definites elicit intuitions of falsity. My proposed analysis draws importantly on Roberts (in: Yoon & Kathol (eds.) OSU working papers in Linguistics: vol. 49: Papers in Semantics 1998; in: Horn & Ward (eds.) Handbook of pragmatics, 2004) and recent research in the semantics and pragmatics of focus.
The aim of the present study was to corroborate or undermine a previously presented conjecture that physicians' estimations of others' opinions are influenced by their own opinions. We used a questionnaire-based cross-sectional design and described a situation where an imminently dying patient was provided with alleviating drugs which also shortened life and, additionally, were intended to do so. We asked what would happen to physicians' own trust if they took the action described, and also what the physician estimated would happen to the general public's trust in health services. Decrease of trust was used as a surrogate for an undesirable action. The results are presented as proportions with a 95 % confidence interval. Statistical analysis was based on an inter-rater agreement test as well as the χ2 test and odds ratio with 95 % CI. We found a moderate inter-rater agreement between what would happen with the physicians' own trust in healthcare and their estimations of what would happen with the general population's trust. We identified a significant difference between being pro et contra the treatment with double intentions and the estimation of the general population's trust. Focusing on either decreasing or increasing own trust and being pro or contra the action, we identified a strong association [OR 79]. Although the inter-rater agreement in the present study was somewhat weaker compared to a study about the explicit use of the term 'physician-assisted suicide', we found that our hypothesis, that physicians' estimations of others' opinions are influenced by their own opinions, was corroborated. This might have implications in research as well as in clinical decision-making. We suggest that Merton's ideal of disinterestedness should be highlighted.
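The two statistics this abstract relies on, an odds ratio with a 95 % CI and an inter-rater agreement measure, can be computed with textbook formulas. A generic sketch of the Woolf log-scale OR interval and Cohen's kappa, not the authors' own code:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] with a 95% CI
    computed on the log scale (Woolf's method)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    log_or = math.log(or_)
    return or_, math.exp(log_or - z * se), math.exp(log_or + z * se)

def cohen_kappa(table):
    """Cohen's kappa for a square agreement table
    (rows: rater 1's categories, columns: rater 2's categories)."""
    total = sum(sum(row) for row in table)
    observed = sum(table[i][i] for i in range(len(table))) / total
    expected = sum(sum(table[i]) * sum(row[i] for row in table)
                   for i in range(len(table))) / total ** 2
    return (observed - expected) / (1 - expected)
```

Both functions assume all cell counts are positive; a zero cell would require a continuity correction before the OR can be computed.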
This article analyzes approaches to nondeterminacy which suggest that, when primary criteria fail to fully determine a best alternative, one can make justified choices by introducing a secondary criterion. It is shown that these approaches risk violating Basic Contraction Consistency. Some ways of adjusting two-step models in order to protect against this are addressed, and it is suggested that proponents of two-step models should adopt formal conditions qualifying what counts as a permissible secondary criterion, resembling the supervaluationist conditions qualifying what counts as an admissible precisification of a vague term.
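The contraction-consistency risk described above can be made concrete with a toy two-step chooser. Everything here (the alternatives x, y, z, the primary "beats" relation, and the secondary ranking) is an invented illustration, not taken from the article:

```python
# Primary criterion: a strict partial order given as "beats" pairs.
# Here y strictly beats z; x is incomparable to both (assumed toy setup).
BEATS = {("y", "z")}

# Secondary criterion: a tie-breaking linear order (lower index = better).
SECONDARY = ["z", "x", "y"]

def maximal(options):
    """Step 1: alternatives not strictly beaten by anything in `options`."""
    return [x for x in options
            if not any((y, x) in BEATS for y in options)]

def choose(options):
    """Step 2: break remaining ties with the secondary ranking."""
    return min(maximal(options), key=SECONDARY.index)

print(choose(["x", "y", "z"]))  # x: y eliminates z, secondary prefers x to y
print(choose(["x", "z"]))       # z: nothing eliminated, secondary prefers z
```

Basic Contraction Consistency requires that an alternative chosen from a set is still chosen from any subset containing it; here x is chosen from {x, y, z} but not from {x, z}, because removing y resurrects z as a primary-maximal option that the secondary criterion prefers.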
Much research aimed at developing measures for normative criteria to guide the assessment of healthcare resource allocation decisions has focused on health maximization, equity concerns and, more recently, approaches based on health capabilities. However, a widely embraced idea is that health resources should be allocated to meet health needs. Little attention has been given to the principle of need, which is often mentioned as an alternative independent criterion that could be used to guide healthcare evaluations. This paper develops a model and indicator of need satisfaction that aggregates the health needs of a population in a particular time period into a single measure which weights individual health needs by the severity of their ill health. The paper provides a first step towards formalizing the principle of need as a measurable objective for healthcare policy, and we discuss some challenges for future research, including incorporating the duration of time into need-based health evaluations.
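One way the severity-weighted aggregation described above might look, purely as an illustrative sketch (the paper's actual formalization, its severity weights, and its scale may all differ):

```python
def need_satisfaction_index(population):
    """Severity-weighted need-satisfaction index (hypothetical formulation).

    Each person is a (need, met, severity) triple:
      need     -- size of the health need (e.g. a health gap), need > 0
      met      -- portion of that need actually satisfied, 0 <= met <= need
      severity -- weight reflecting how ill the person is

    Returns a value in [0, 1]: 1 means all severity-weighted
    needs are fully met.
    """
    weighted_met = sum(s * m for n, m, s in population)
    weighted_need = sum(s * n for n, m, s in population)
    return weighted_met / weighted_need
```

Weighting by severity means that leaving a severely ill person's need unmet pulls the index down more than leaving a mildly ill person's equally large need unmet, which matches the prioritarian intuition behind the principle of need.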
The volume _SocioAesthetics: Ambience – Imaginary_ brings together scholars from social science, aesthetics, the arts, and cultural studies in a case-driven debate, ranging from biometrics to luxury commodities, on how a new alignment of aesthetics and the social is possible and what its prospects may be.