Research that is initiated, designed, or funded by sponsor agencies based in countries with relatively high social and economic development, and conducted in countries that are relatively less developed, gives rise to many important ethical challenges. Although clinical trials of HIV vaccines began ten years ago in the US and Europe, an increasing number of trials are now being conducted or planned in other countries, including several that are considered “developing” countries. Safeguarding the rights and welfare of individuals participating as research subjects in developing countries is a priority. In September 1997, the Joint United Nations Programme on HIV/AIDS (UNAIDS) embarked on a process of international consultation; its purpose was to further define the important ethical issues and to formulate guidance that might facilitate the ethical design and conduct of HIV vaccine trials in international contexts. This paper summarises the major outcomes of the UNAIDS consultative process.
For valid informed consent, it is crucial that patients or research participants fully understand all that their consent entails. Testing and revising informed consent documents with the assistance of their addressees can improve their understandability. In this study we aimed to further develop a method for testing and improving informed consent documents with regard to readability and test-readers’ understanding and reactions. We tested, revised, and retested template informed consent documents for biobank research by means of 11 focus group interviews with members of the documents’ target population. For the analysis of focus group excerpts we used qualitative content analysis. Revisions were made based on focus group feedback in an iterative process. Focus group participants gave substantial feedback on the original and on the revised version of the tested documents. Revisions included adding and clarifying explanations, as well as including an info-box summarizing the main points of the text and an illustrative graphic. Our results indicate positive effects on the tested and revised informed consent documents with regard to general readability and test-readers’ understanding and reactions. Participatory methods for improving informed consent should be applied more often and further evaluated for both medical interventions and clinical research. Particular conceptual and methodological challenges need to be addressed in the future.
The purpose of this Article is to consider a novel framework for institutional shareholders’ activism in the United States. This new activism framework would be aimed at improving, at minimal costs, the performance of the portfolio companies in which institutional shareholders invest. The Article begins by laying out this new activism framework and then compares the proposed framework with the prevalent mode of activism through hedge funds. The Article concludes with a discussion of certain implementation challenges, and calls for future research into the proposed activism framework.
It would be unkind but not inaccurate to say that most experimental philosophy is just psychology with worse methods and better theories. In Experimental Ethics: Towards an Empirical Moral Philosophy, Christoph Luetge, Hannes Rusch, and Matthias Uhl set out to make this comparison less invidious and more flattering. Their book has 16 chapters, organized into five sections and bookended by the editors’ own introduction and prospectus. Contributors hail from four countries (Germany, USA, Spain, and the United Kingdom) and five disciplines (philosophy, psychology, cognitive science, economics, and sociology). While the chapters are of mixed quality and originality, there are several fine contributions to the field. These especially include Stephan Wolf and Alexander Lenger’s sophisticated attempt to operationalize the Rawlsian notion of a veil of ignorance, Nina Strohminger et al.’s survey of the methods available to experimental ethicists for studying implicit morality, Fernando Aguiar et al.’s exploration of the possibility of operationalizing reflective equilibrium in the lab, and Nikil Mukerji’s careful defusing of three debunking arguments about the reliability of philosophical intuitions.
This article will discuss the ongoing development of a Marxist theory of international relations. Examining the work of Hannes Lacher and that of the contributors to Marxism and World Politics reveals an overarching concern amongst this group of scholars to engage with the central concerns of the discipline of International Relations – the nature of the state, anarchy, and war. Their analysis provides an excellent starting point for the development of a Marxist approach to international relations.
This volume is based on papers presented at a conference on defeasibility in ethics, epistemology, law, and logic that took place at the Goethe University in Frankfurt in 2010. The subtitle (“Knowledge, Agency, Responsibility, and the Law”) better reflects the content than does the title of the original conference. None of the papers focuses directly or primarily on defeasible reasoning in logic, though a few touch on this indirectly. Nor are the papers evenly split among the topics. Six are primarily about epistemology, four about responsibility, and one each focuses on agency and the law.
This volume is a collection of essays presented at the 31st International Wittgenstein Symposium, Kirchberg, in August 2008. It has the character of a high-quality journal issue. There is no introduction, and the papers do not all directly bear on the topic of the original conference, which was "Reduction and Elimination in Philosophy and the Sciences". In what follows, I offer a short description of each paper, and add critical remarks in some cases.
The public sphere is a, if not the, central concept in Hannah Arendt's thought. Yet although the notion occupies a distinguished place in all of Arendt's philosophical and essayistic writings, her concept of the public sphere has mostly been received one-sidedly, in its political sense. Hannes Bajohr shows, by contrast, that it possesses dimensions that go beyond this conventional interpretation: for Arendt, the public sphere becomes a condition of knowledge and takes on epistemological significance.
In Epistemic Entitlement: The Right to Believe, Hannes Ole Matthiessen develops a social externalist account of epistemic entitlement and perceptual knowledge. The basic idea is that positive epistemic status should be understood as a specific kind of epistemic right, that is, a right to believe. Since rights have consequences for how others are required to treat the bearer of the right, they have to be publicly accessible. The author therefore suggests that epistemic entitlement can plausibly be conceptualized as a status that is grounded in a publicly observable perceptual situation, rather than in a perceptual experience, as current theories of epistemic entitlement state. It is then argued that such a social externalist account of entitlement, in which the perceiver's epistemic perspective becomes relevant only in the exceptional case where an entitlement is challenged, can nevertheless do justice to our central intuitions about first-personal epistemic phenomenology.
This essay develops a joint theory of rational (all-or-nothing) belief and degrees of belief. The theory is based on three assumptions: the logical closure of rational belief; the axioms of probability for rational degrees of belief; and the so-called Lockean thesis, in which the concepts of rational belief and rational degree of belief figure simultaneously. In spite of what is commonly believed, this essay will show that this combination of principles is satisfiable (and indeed nontrivially so) and that the principles are jointly satisfied if and only if rational belief is equivalent to the assignment of a stably high rational degree of belief. Although the logical closure of belief and the Lockean thesis are attractive postulates in themselves, initially this may seem like a formal “curiosity”; however, as will be argued in the rest of the essay, a very reasonable theory of rational belief can be built around these principles that is not ad hoc and that has various philosophical features that are plausible independently. In particular, this essay shows that the theory allows for a solution to the Lottery Paradox, and it has nice applications to formal epistemology. The price that is to be paid for this theory is a strong dependency of belief on the context, where a context involves both the agent's degree of belief function and the partitioning or individuation of the underlying possibilities. But as this essay argues, that price seems to be affordable.

This essay develops a joint theory of rational (all-or-nothing) belief and degrees of belief. The theory is based on three assumptions: the logical closure of rational belief; the axioms of probability for rational degrees of belief; and the so-called Lockean thesis, in which the concepts of rational belief and rational degree of belief figure simultaneously.
In spite of what is commonly believed, I will show that this combination of principles is satisfiable (and indeed nontrivially so) and that the principles are jointly satisfied if and only if rational belief is equivalent to the assignment of a stably high rational degree of belief. Although the logical closure of belief and the Lockean thesis are attractive postulates in themselves, initially this may seem like a formal “curiosity”; however, as I am going to argue in the rest of the essay, a very reasonable theory of rational belief can be built around these principles that is not ad hoc but that has various philosophical features that are plausible independently.
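The equivalence summarized above can be illustrated with a small computation. The sketch below is an editorial toy example, not the paper's text: the four-world distribution is invented, and "P-stable" is used in the Leitgeb-style sense that every world inside the set is more probable than the set's whole complement. On this example, believing exactly the supersets of a P-stable set X yields a belief set that is logically closed (any two believed propositions share the core X, so their intersection is believed) and satisfies the Lockean thesis at threshold 1/2.

```python
from itertools import chain, combinations

# Toy probability space: four worlds with these probabilities.
P = {"w1": 0.54, "w2": 0.34, "w3": 0.06, "w4": 0.06}
worlds = list(P)

def prob(event):
    return sum(P[w] for w in event)

def p_stable(X):
    # X is P-stable iff every world in X is more probable
    # than the whole complement of X taken together.
    comp = prob(set(worlds) - X)
    return all(P[w] > comp for w in X if P[w] > 0)

subsets = [set(s) for s in chain.from_iterable(
    combinations(worlds, k) for k in range(1, len(worlds) + 1))]
stable = [X for X in subsets if p_stable(X)]

# Believe exactly the supersets of the smallest P-stable set X:
# the resulting belief set is closed under intersection and superset,
# and every believed proposition has probability above 1/2.
X = min(stable, key=len)
believed = [Y for Y in subsets if X <= Y]
assert all(prob(Y) > 0.5 for Y in believed)  # Lockean thesis, t = 1/2
```

Varying the distribution shows the context-dependence the abstract mentions: which sets come out P-stable, and hence what is believed, shifts with the degree-of-belief function.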
One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its sequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: Accuracy: An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In this paper, we make this norm mathematically precise in various ways. We describe three epistemic dilemmas that an agent might face if she attempts to follow Accuracy, and we show that the only inaccuracy measures that do not give rise to such dilemmas are the quadratic inaccuracy measures. In the sequel, we derive the main tenets of Bayesianism from the relevant mathematical versions of Accuracy to which this characterization of the legitimate inaccuracy measures gives rise, but we also show that Jeffrey conditionalization has to be replaced by a different method of update in order for Accuracy to be satisfied.
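A quadratic (Brier) inaccuracy measure of the kind singled out above can be illustrated numerically. The sketch below is an editorial illustration, not the paper's argument: quadratic inaccuracy is strictly proper, i.e., an agent whose credence in a proposition is p minimizes her expected inaccuracy by reporting p itself, so following Accuracy never pressures her away from her own credence (the grid search and the value p = 0.7 are invented for the example).

```python
# Brier (quadratic) inaccuracy of reporting credence r
# toward a proposition with truth value v in {0, 1}.
def brier(r, v):
    return (v - r) ** 2

# Expected inaccuracy of reporting r when your actual credence is p.
def expected_inaccuracy(r, p):
    return p * brier(r, 1) + (1 - p) * brier(r, 0)

p = 0.7
reports = [i / 100 for i in range(101)]
best = min(reports, key=lambda r: expected_inaccuracy(r, p))
# The grid minimum sits exactly at r = p.
assert best == p
```

The same grid search with a non-quadratic measure such as absolute distance would push the agent toward an extreme report of 0 or 1, which is one way of seeing why the quadratic measures are privileged.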
One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its prequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: Accuracy: An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In the prequel, we made this norm mathematically precise; in this paper, we derive its consequences. We show that the two core tenets of Bayesianism follow from the norm, while the characteristic claim of the Objectivist Bayesian follows from the norm along with an extra assumption. Finally, we consider Richard Jeffrey’s proposed generalization of conditionalization. We show not only that his rule cannot be derived from the norm, unless the requirement of Rigidity is imposed from the start, but further that the norm reveals it to be illegitimate. We end by deriving an alternative updating rule for those cases in which Jeffrey’s is usually supposed to apply.
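The two update rules at issue can be written out concretely. The sketch below is an editorial illustration with invented numbers, not the paper's derivation: strict conditionalization moves all probability onto the evidence E, while Jeffrey conditionalization moves the probability of E to an intermediate value q and rescales inside E and inside its complement, which is exactly where the Rigidity requirement mentioned above is built in.

```python
# A toy prior over four worlds; E is the evidence proposition.
prior = {"w1": 0.4, "w2": 0.3, "w3": 0.2, "w4": 0.1}
E = {"w1", "w2"}

def conditionalize(P, E):
    # Strict conditionalization: zero out ~E, renormalize on E.
    pE = sum(p for w, p in P.items() if w in E)
    return {w: (p / pE if w in E else 0.0) for w, p in P.items()}

def jeffrey(P, E, q):
    # Jeffrey conditionalization: set the new probability of E to q,
    # rescaling uniformly inside E and inside ~E (Rigidity).
    pE = sum(p for w, p in P.items() if w in E)
    return {w: (q * p / pE if w in E else (1 - q) * p / (1 - pE))
            for w, p in P.items()}

post = jeffrey(prior, E, 0.9)
# With q = 1, Jeffrey's rule collapses into strict conditionalization.
assert jeffrey(prior, E, 1.0) == conditionalize(prior, E)
```

Both posteriors remain probability functions; the disagreement the abstract reports concerns whether the Jeffrey-style update with q strictly between 0 and 1 can be justified by the Accuracy norm.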
In everyday life we either express our beliefs in all-or-nothing terms or we resort to numerical probabilities: I believe it's going to rain or my chance of winning is one in a million. The Stability of Belief develops a theory of rational belief that allows us to reason with all-or-nothing belief and numerical belief simultaneously.
Is it possible to give an explicit definition of belief in terms of subjective probability, such that believed propositions are guaranteed to have a sufficiently high probability, and yet it is neither the case that belief is stripped of any of its usual logical properties, nor is it the case that believed propositions are bound to have probability 1? We prove the answer is ‘yes’, and that given some plausible logical postulates on belief that involve a contextual “cautiousness” threshold, there is but one way of determining the extension of the concept of belief that does the job. The qualitative concept of belief is not to be eliminated from scientific or philosophical discourse, rather, by reducing qualitative belief to assignments of resiliently high degrees of belief and a “cautiousness” threshold, qualitative and quantitative belief turn out to be governed by one unified theory that offers the prospects of a huge range of applications. Within that theory, logic and probability theory are not opposed to each other but go hand in hand.
In discussions about whether the Principle of the Identity of Indiscernibles is compatible with structuralist ontologies of mathematics, it is usually assumed that individual objects are subject to criteria of identity which somehow account for the identity of the individuals. Much of this debate concerns structures that admit of non-trivial automorphisms. We consider cases from graph theory that violate even weak formulations of PII. We argue that (i) the identity or difference of places in a structure is not to be accounted for by anything other than the structure itself and that (ii) mathematical practice provides evidence for this view. We want to thank Leon Horsten, Jeff Ketland, Øystein Linnebo, John Mayberry, Richard Pettigrew, and Philip Welch for valuable comments on drafts of this paper. We are especially grateful to Fraser MacBride for correcting our interpretation of two of his papers and for other helpful comments.
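A standard graph-theoretic case of the kind discussed above can be checked mechanically. The four-cycle is an editorial choice of example, not necessarily one of the paper's: in the cycle graph on four vertices, every vertex is carried to every other by an automorphism, so no vertex is discernible from any other by its place in the structure, and yet the vertices are plainly distinct.

```python
from itertools import permutations

# The four-cycle: vertices 0..3, edges as unordered pairs.
edges = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}

def is_automorphism(perm):
    # perm maps vertex v to perm[v]; it is an automorphism
    # iff it carries the edge set onto itself.
    image = {frozenset((perm[a], perm[b])) for a, b in edges}
    return image == edges

autos = [p for p in permutations(range(4)) if is_automorphism(p)]

# Vertex-transitivity: for any two vertices there is an automorphism
# mapping one to the other, so the vertices are structurally indiscernible.
transitive = all(any(p[u] == v for p in autos)
                 for u in range(4) for v in range(4))
# The automorphism group is the dihedral group of the square, order 8.
assert len(autos) == 8 and transitive
```

Any structural property expressible over the graph alone is preserved by these automorphisms, which is the sense in which even weak, structure-internal formulations of PII fail here.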
In this study we investigate the influence of reason-relation readings of indicative conditionals and ‘and’/‘but’/‘therefore’ sentences on various cognitive assessments. According to the Frege-Grice tradition, a dissociation is expected. Specifically, differences in the reason-relation reading of these sentences should affect participants’ evaluations of their acceptability but not of their truth value. In two experiments we tested this assumption by introducing a relevance manipulation into the truth-table task as well as in other tasks assessing the participants’ acceptability and probability evaluations. Across the two experiments a strong dissociation was found. The reason-relation reading of all four sentences strongly affected their probability and acceptability evaluations, but hardly affected their respective truth evaluations. Implications of this result for recent work on indicative conditionals are discussed.
What kinds of sentences with truth predicate may be inserted plausibly and consistently into the T-scheme? We state an answer in terms of dependence: those sentences which depend directly or indirectly on non-semantic states of affairs (only). In order to make this precise we introduce a theory of dependence according to which a sentence φ is said to depend on a set Φ of sentences iff the truth value of φ supervenes on the presence or absence of the sentences of Φ in/from the extension of the truth predicate. Both φ and the members of Φ are allowed to contain the truth predicate. On that basis we are able to define notions such as ungroundedness or self-referentiality within a classical semantics, and we can show that there is an adequate definition of truth for the class of sentences which depend on non-semantic states of affairs.
If an agent believes that the probability of E being true is 1/2, should she accept a bet on E at even odds or better? Yes, but only given certain conditions. This paper is about what those conditions are. In particular, we think that there is a condition that has been overlooked so far in the literature. We discovered it in response to a paper by Hitchcock (2004) in which he argues for the 1/3 answer to the Sleeping Beauty problem. Hitchcock argues that this credence follows from calculating her fair betting odds, plus the assumption that Sleeping Beauty’s credences should track her fair betting odds. We will show that this last assumption is false. Sleeping Beauty’s credences should not follow her fair betting odds due to a peculiar feature of her epistemic situation.
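The frequency behind the betting-odds calculation at issue can be checked by simulation. The sketch below is an editorial illustration, not the paper's argument: across many runs of the Sleeping Beauty protocol (one awakening on heads, two on tails), the fraction of awakenings at which the coin in fact landed heads tends to 1/3, which is the per-awakening quantity that fair betting odds track and that, on the view summarized above, can come apart from rational credence.

```python
import random

random.seed(0)
heads_awakenings = 0
total_awakenings = 0
for _ in range(100_000):
    heads = random.random() < 0.5
    # Heads: Beauty is woken once; tails: she is woken twice.
    wakings = 1 if heads else 2
    total_awakenings += wakings
    if heads:
        heads_awakenings += 1

freq = heads_awakenings / total_awakenings
# The long-run fraction of heads-awakenings is close to 1/3.
assert abs(freq - 1 / 3) < 0.01
```

Nothing in the simulation settles whether credence should equal this frequency; it only makes concrete the betting quantity over which halfers and thirders disagree.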
Some authors have claimed that ante rem structuralism has problems with structures that have indiscernible places. In response, I argue that there is no requirement that mathematical objects be individuated in a non-trivial way. Metaphysical principles and intuitions to the contrary do not stand up to ordinary mathematical practice, which presupposes an identity relation that, in a sense, cannot be defined. In complex analysis, the two square roots of –1 are indiscernible: anything true of one of them is true of the other. I suggest that i functions like a parameter in natural deduction systems. I gave an early version of this paper at a workshop on structuralism in mathematics and science, held in the Autumn of 2006, at Bristol University. Thanks to the organizers, particularly Hannes Leitgeb, James Ladyman, and Øystein Linnebo, to my commentator Richard Pettigrew, and to the audience there. The paper also benefited considerably from a preliminary session at the Arché Research Centre at the University of St Andrews. I am indebted to my colleagues Craige Roberts, for help with the linguistics literature, and Ben Caplan and Gabriel Uzquiano, for help with the metaphysics. Thanks also to Hannes Leitgeb and Jeffrey Ketland for reading an earlier version of the manuscript and making helpful suggestions. I also benefited from conversations with Richard Heck, John Mayberry, Kevin Scharp, and Jason Stanley.