For valid informed consent, it is crucial that patients or research participants fully understand what their consent entails. Testing and revising informed consent documents with the assistance of their addressees can improve their understandability. In this study we aimed to further develop a method for testing and improving informed consent documents with regard to readability and test-readers’ understanding and reactions. We tested, revised, and retested template informed consent documents for biobank research by means of 11 focus group interviews with members of the documents’ target population. For the analysis of focus group excerpts we used qualitative content analysis. Revisions were made on the basis of focus group feedback in an iterative process. Focus group participants gave substantial feedback on both the original and the revised version of the tested documents. Revisions included adding and clarifying explanations, an info-box summarizing the main points of the text, and an illustrative graphic. Our results indicate positive effects of testing and revision on general readability and on test-readers’ understanding and reactions. Participatory methods for improving informed consent should be applied more often and evaluated further, for both medical interventions and clinical research. Particular conceptual and methodological challenges need to be addressed in the future.
The purpose of this Article is to consider a novel framework for institutional shareholders’ activism in the United States. This new activism framework would be aimed at improving, at minimal cost, the performance of the portfolio companies in which institutional shareholders invest. The Article begins by laying out this new activism framework and then compares the proposed framework with the prevalent mode of activism through hedge funds. The Article concludes with a discussion of certain implementation challenges and calls for future research into the proposed activism framework.
It would be unkind but not inaccurate to say that most experimental philosophy is just psychology with worse methods and better theories. In Experimental Ethics: Towards an Empirical Moral Philosophy, Christoph Luetge, Hannes Rusch, and Matthias Uhl set out to make this comparison less invidious and more flattering. Their book has 16 chapters, organized into five sections and bookended by the editors’ own introduction and prospectus. Contributors hail from four countries (Germany, USA, Spain, and the United Kingdom) and five disciplines (philosophy, psychology, cognitive science, economics, and sociology). While the chapters are of mixed quality and originality, there are several fine contributions to the field. These especially include Stephan Wolf and Alexander Lenger’s sophisticated attempt to operationalize the Rawlsian notion of a veil of ignorance, Nina Strohminger et al.’s survey of the methods available to experimental ethicists for studying implicit morality, Fernando Aguiar et al.’s exploration of the possibility of operationalizing reflective equilibrium in the lab, and Nikil Mukerji’s careful defusing of three debunking arguments about the reliability of philosophical intuitions.
This article will discuss the ongoing development of a Marxist theory of international relations. Examining the work of Hannes Lacher and that of the contributors to Marxism and World Politics reveals an overarching concern amongst this group of scholars to engage with the central concerns of the discipline of International Relations: the nature of the state, anarchy, and war. Their analysis provides an excellent starting point for the development of a Marxist approach to international relations.
This volume is based on papers presented at a conference on defeasibility in ethics, epistemology, law, and logic that took place at the Goethe University in Frankfurt in 2010. The subtitle (“Knowledge, Agency, Responsibility, and the Law”) better reflects the content than does the title of the original conference. None of the papers focuses directly or primarily on defeasible reasoning in logic, though a few touch on this indirectly. Nor are the papers evenly split among the topics. Six are primarily about epistemology, four about responsibility, and one each focuses on agency and the law.
This volume is a collection of essays presented at the 31st International Wittgenstein Symposium, Kirchberg, in August 2008. It has the character of a high-quality journal issue. There is no introduction, and the papers do not all directly bear on the topic of the original conference, which was "Reduction and Elimination in Philosophy and the Sciences". In what follows, I offer a short description of each paper, and add critical remarks in some cases.
The public realm (Öffentlichkeit) is a central concept in Hannah Arendt’s thought, if not the central one. Yet although the notion occupies a distinguished place in all of Arendt’s philosophical and essayistic writings, her concept of the public has mostly been received one-sidedly, in a political sense only. Hannes Bajohr shows, by contrast, that it has dimensions that go beyond this conventional interpretation: for Arendt, the public realm becomes a condition of knowledge and takes on epistemological significance.
In Epistemic Entitlement: The Right to Believe, Hannes Ole Matthiessen develops a social externalist account of epistemic entitlement and perceptual knowledge. The basic idea is that positive epistemic status should be understood as a specific kind of epistemic right, that is, a right to believe. Since rights have consequences for how others are required to treat the bearer of the right, they have to be publicly accessible. The author therefore suggests that epistemic entitlement can plausibly be conceptualized as a status that is grounded in a publicly observable perceptual situation, rather than in a perceptual experience, as current theories of epistemic entitlement hold. It is then argued that such a social externalist account of entitlement, in which the perceiver’s epistemic perspective becomes relevant only in the exceptional case where an entitlement is challenged, can nevertheless do justice to our central intuitions about first-personal epistemic phenomenology.
This essay develops a joint theory of rational (all-or-nothing) belief and degrees of belief. The theory is based on three assumptions: the logical closure of rational belief; the axioms of probability for rational degrees of belief; and the so-called Lockean thesis, in which the concepts of rational belief and rational degree of belief figure simultaneously. In spite of what is commonly believed, this essay will show that this combination of principles is satisfiable (and indeed nontrivially so) and that the principles are jointly satisfied if and only if rational belief is equivalent to the assignment of a stably high rational degree of belief. Although the logical closure of belief and the Lockean thesis are attractive postulates in themselves, initially this may seem like a formal “curiosity”; however, as will be argued in the rest of the essay, a very reasonable theory of rational belief can be built around these principles that is not ad hoc and that has various philosophical features that are plausible independently. In particular, this essay shows that the theory allows for a solution to the Lottery Paradox, and it has nice applications to formal epistemology. The price that is to be paid for this theory is a strong dependency of belief on the context, where a context involves both the agent’s degree of belief function and the partitioning or individuation of the underlying possibilities. But as this essay argues, that price seems to be affordable.
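The conflict that the stability account resolves can be illustrated with the Lottery Paradox the essay mentions. The sketch below is mine, not the essay’s: it uses an illustrative 100-ticket lottery and a hypothetical Lockean threshold of 0.99.

```python
# Lottery with 100 tickets, exactly one of which wins; uniform chances.
n = 100
threshold = 0.99  # hypothetical Lockean threshold: believe p iff P(p) >= threshold

# Each proposition "ticket i loses" has probability (n - 1) / n = 0.99,
# so the Lockean thesis licenses believing each of them ...
p_loses = (n - 1) / n
print(p_loses >= threshold)     # True: every "ticket i loses" is believed

# ... but their conjunction "every ticket loses" has probability 0,
# while logical closure of belief would demand believing it too.
p_all_lose = 0.0
print(p_all_lose >= threshold)  # False: the conjunction cannot be believed
```

On the stability account, no single proposition “ticket i loses” corresponds to a stably high degree of belief in a uniform lottery, so the paradoxical belief set never forms in the first place.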
One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its sequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: Accuracy: an epistemic agent ought to minimize the inaccuracy of her partial beliefs. In this paper, we make this norm mathematically precise in various ways. We describe three epistemic dilemmas that an agent might face if she attempts to follow Accuracy, and we show that the only inaccuracy measures that do not give rise to such dilemmas are the quadratic inaccuracy measures. In the sequel, we derive the main tenets of Bayesianism from the relevant mathematical versions of Accuracy to which this characterization of the legitimate inaccuracy measures gives rise, but we also show that Jeffrey conditionalization has to be replaced by a different method of update in order for Accuracy to be satisfied.
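The quadratic inaccuracy measures singled out here are, in their simplest form, Brier-style scores: the inaccuracy of a credence function at a world is the sum of squared distances between the credences and the 0/1 truth values at that world. The following minimal sketch uses illustrative proposition names and numbers of my own, not the paper’s.

```python
def quadratic_inaccuracy(credences, truth_values):
    """Brier-style (quadratic) inaccuracy: the sum of squared distances
    between credences and the 0/1 truth values at a given world."""
    return sum((credences[p] - truth_values[p]) ** 2 for p in credences)

# Illustrative credences over two propositions, evaluated at one world.
credences = {"rain": 0.8, "wind": 0.3}
world = {"rain": 1, "wind": 0}  # at this world: rain is true, wind is false
print(quadratic_inaccuracy(credences, world))  # (0.8-1)^2 + (0.3-0)^2, about 0.13
```

The lower the score, the closer the credences sit to the actual truth values; the dilemma-free measures the paper characterizes are exactly of this quadratic shape.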
One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its prequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: Accuracy: an epistemic agent ought to minimize the inaccuracy of her partial beliefs. In the prequel, we made this norm mathematically precise; in this paper, we derive its consequences. We show that the two core tenets of Bayesianism follow from the norm, while the characteristic claim of the Objectivist Bayesian follows from the norm along with an extra assumption. Finally, we consider Richard Jeffrey’s proposed generalization of conditionalization. We show not only that his rule cannot be derived from the norm, unless the requirement of Rigidity is imposed from the start, but further that the norm reveals it to be illegitimate. We end by deriving an alternative updating rule for those cases in which Jeffrey’s is usually supposed to apply.
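One of the two core tenets at issue is update by conditionalization on evidence learned with certainty. As a hedged sketch of what that rule amounts to over a finite set of worlds (the world names and priors below are illustrative, not from the paper):

```python
def conditionalize(prior, evidence):
    """Bayesian conditionalization over a finite set of worlds: zero out
    the worlds incompatible with the evidence and renormalize the rest."""
    total = sum(p for w, p in prior.items() if w in evidence)
    return {w: (p / total if w in evidence else 0.0) for w, p in prior.items()}

# Illustrative: four equiprobable worlds; the evidence rules out two of them.
prior = {"w1": 0.25, "w2": 0.25, "w3": 0.25, "w4": 0.25}
posterior = conditionalize(prior, {"w1", "w2"})
print(posterior)  # {'w1': 0.5, 'w2': 0.5, 'w3': 0.0, 'w4': 0.0}
```

Jeffrey’s rule generalizes this to evidence that merely shifts the probabilities of a partition rather than making one cell certain; it is that generalization which the paper argues cannot be derived from the accuracy norm alone.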
In everyday life we either express our beliefs in all-or-nothing terms or we resort to numerical probabilities: I believe it's going to rain or my chance of winning is one in a million. The Stability of Belief develops a theory of rational belief that allows us to reason with all-or-nothing belief and numerical belief simultaneously.
Is it possible to give an explicit definition of belief in terms of subjective probability, such that believed propositions are guaranteed to have a sufficiently high probability, and yet it is neither the case that belief is stripped of any of its usual logical properties, nor is it the case that believed propositions are bound to have probability 1? We prove that the answer is ‘yes’, and that, given some plausible logical postulates on belief that involve a contextual “cautiousness” threshold, there is but one way of determining the extension of the concept of belief that does the job. The qualitative concept of belief is not to be eliminated from scientific or philosophical discourse; rather, by reducing qualitative belief to assignments of resiliently high degrees of belief and a “cautiousness” threshold, qualitative and quantitative belief turn out to be governed by one unified theory that offers the prospect of a huge range of applications. Within that theory, logic and probability theory are not opposed to each other but go hand in hand.
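On one common rendering of the resiliency (stability) idea behind this reduction, a proposition is stable at threshold 1/2 just in case every positive-probability world in it is more probable than the proposition’s complement taken as a whole. This is a paraphrase, not the paper’s official conditional-probability definition, and the distribution below is illustrative.

```python
def is_p_stable(X, prob):
    """X (a set of worlds) counts as stable at threshold 1/2 iff every
    positive-probability world in X is individually more probable than
    the whole complement of X.  (A common equivalent rendering of the
    conditional-probability formulation of stability.)"""
    p_complement = sum(p for w, p in prob.items() if w not in X)
    return all(p > p_complement for w, p in prob.items() if w in X and p > 0)

# Illustrative distribution over four worlds.
prob = {"w1": 0.54, "w2": 0.2, "w3": 0.2, "w4": 0.06}
print(is_p_stable({"w1"}, prob))              # True:  0.54 > 0.46
print(is_p_stable({"w1", "w2"}, prob))        # False: 0.2 is not > 0.26
print(is_p_stable({"w1", "w2", "w3"}, prob))  # True:  0.2 and 0.54 both > 0.06
```

Stable propositions retain high probability under conditioning on compatible information, which is what lets qualitative belief inherit its usual logical properties without collapsing into probability 1.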
In discussions about whether the Principle of the Identity of Indiscernibles is compatible with structuralist ontologies of mathematics, it is usually assumed that individual objects are subject to criteria of identity which somehow account for the identity of the individuals. Much of this debate concerns structures that admit of non-trivial automorphisms. We consider cases from graph theory that violate even weak formulations of PII. We argue that (i) the identity or difference of places in a structure is not to be accounted for by anything other than the structure itself and that (ii) mathematical practice provides evidence for this view. We want to thank Leon Horsten, Jeff Ketland, Øystein Linnebo, John Mayberry, Richard Pettigrew, and Philip Welch for valuable comments on drafts of this paper. We are especially grateful to Fraser MacBride for correcting our interpretation of two of his papers and for other helpful comments.
When do children acquire a propositional attitude folk psychology or theory of mind? The orthodox answer to this central question of developmental ToM research had long been that around age 4 children begin to apply “belief” and other propositional attitude concepts. This orthodoxy has recently come under serious attack, though, from two sides: scoffers complain that it overestimates children’s early competence and claim that a proper understanding of propositional attitudes emerges only much later. Boosters criticize the orthodoxy for underestimating early competence and claim that even infants ascribe beliefs. In this paper, the orthodoxy is defended on empirical grounds against these two kinds of attacks. On the basis of new evidence, not only can the two attacks safely be countered, but the orthodox claim can actually be strengthened, corroborated, and refined: what emerges around age 4 is an explicit, unified, flexibly conceptual capacity to ascribe propositional attitudes. This unified conceptual capacity contrasts with the less sophisticated, less unified, implicit forms of tracking simpler mental states present in ontogeny long before. This refined version of the orthodoxy can thus most plausibly be spelled out in some form of 2-systems account of theory of mind.
In this study we investigate the influence of reason-relation readings of indicative conditionals and ‘and’/‘but’/‘therefore’ sentences on various cognitive assessments. According to the Frege-Grice tradition, a dissociation is expected. Specifically, differences in the reason-relation reading of these sentences should affect participants’ evaluations of their acceptability but not of their truth value. In two experiments we tested this assumption by introducing a relevance manipulation into the truth-table task as well as into other tasks assessing participants’ acceptability and probability evaluations. Across the two experiments a strong dissociation was found. The reason-relation reading of all four sentences strongly affected their probability and acceptability evaluations, but hardly affected their respective truth evaluations. Implications of this result for recent work on indicative conditionals are discussed.
If an agent believes that the probability of E being true is 1/2, should she accept a bet on E at even odds or better? Yes, but only given certain conditions. This paper is about what those conditions are. In particular, we think that there is a condition that has been overlooked so far in the literature. We discovered it in response to a paper by Hitchcock (2004) in which he argues for the 1/3 answer to the Sleeping Beauty problem. Hitchcock argues that this credence follows from calculating her fair betting odds, plus the assumption that Sleeping Beauty’s credences should track her fair betting odds. We will show that this last assumption is false. Sleeping Beauty’s credences should not follow her fair betting odds due to a peculiar feature of her epistemic situation.
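The peculiar feature at issue is that in the tails world Beauty’s bet is settled at two wakings rather than one. The toy calculation below (illustrative stakes of my own, not the paper’s) reproduces the betting arithmetic behind Hitchcock’s argument: even-odds bets on heads lose money in expectation, and the fair price corresponds to 1/3, which is exactly why fair betting odds and credence can come apart here.

```python
def expected_net(price):
    """Expected net payoff when, at every waking, Beauty pays `price`
    for a ticket worth 1 if the coin landed heads (0 if tails).
    Heads (prob 1/2): one waking, so one ticket bought.
    Tails (prob 1/2): two wakings, so the bet is settled twice."""
    return 0.5 * (1 - price) + 0.5 * (-2 * price)

print(expected_net(0.5))    # -0.25: betting at her credence of 1/2 loses on average
print(expected_net(1 / 3))  # about 0: the fair price is 1/3, not her credence
```

Because the number of settled bets itself depends on the outcome, fair betting odds here measure something other than degree of belief, which is the wedge the paper drives between the two.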
What kinds of sentences with truth predicate may be inserted plausibly and consistently into the T-scheme? We state an answer in terms of dependence: those sentences which depend directly or indirectly on non-semantic states of affairs (only). In order to make this precise we introduce a theory of dependence according to which a sentence φ is said to depend on a set Φ of sentences iff the truth value of φ supervenes on the presence or absence of the sentences of Φ in/from the extension of the truth predicate. Both φ and the members of Φ are allowed to contain the truth predicate. On that basis we are able to define notions such as ungroundedness or self-referentiality within a classical semantics, and we can show that there is an adequate definition of truth for the class of sentences which depend on non-semantic states of affairs.
Some authors have claimed that ante rem structuralism has problems with structures that have indiscernible places. In response, I argue that there is no requirement that mathematical objects be individuated in a non-trivial way. Metaphysical principles and intuitions to the contrary do not stand up to ordinary mathematical practice, which presupposes an identity relation that, in a sense, cannot be defined. In complex analysis, the two square roots of –1 are indiscernible: anything true of one of them is true of the other. I suggest that i functions like a parameter in natural deduction systems. I gave an early version of this paper at a workshop on structuralism in mathematics and science, held in the Autumn of 2006, at Bristol University. Thanks to the organizers, particularly Hannes Leitgeb, James Ladyman, and Øystein Linnebo, to my commentator Richard Pettigrew, and to the audience there. The paper also benefited considerably from a preliminary session at the Arché Research Centre at the University of St Andrews. I am indebted to my colleagues Craige Roberts, for help with the linguistics literature, and Ben Caplan and Gabriel Uzquiano, for help with the metaphysics. Thanks also to Hannes Leitgeb and Jeffrey Ketland for reading an earlier version of the manuscript and making helpful suggestions. I also benefited from conversations with Richard Heck, John Mayberry, Kevin Scharp, and Jason Stanley.
This is part B of a paper in which we defend a semantics for counterfactuals which is probabilistic in the sense that the truth condition for counterfactuals refers to a probability measure. Because of its probabilistic nature, it allows a counterfactual to be true even in the presence of relevant exception-worlds (worlds at which the consequent fails), as long as such exceptions are not too widely spread. The semantics is made precise and studied in different versions which are related to each other by representation theorems. Despite its probabilistic nature, we show that the semantics and the resulting system of logic may be regarded as a naturalistically vindicated variant of David Lewis’s work. We argue that counterfactuals have two kinds of pragmatic meanings and come attached with two types of degrees of acceptability or belief, one being suppositional, the other being truth-based as determined by our probabilistic semantics; these degrees cannot always coincide, owing to a new triviality result for counterfactuals, and they should not be identified, in the light of their different interpretations and pragmatic purposes. However, for plain assertability the difference between them does not matter. Hence, if the suppositional theory of counterfactuals is formulated with sufficient care, our truth-conditional theory of counterfactuals is consistent with it. The results of our investigation are used to assess a claim considered by Hawthorne and Hájek, namely the thesis that most ordinary counterfactuals are false.
The literatures on both authentic leadership and behavioral integrity have argued that leader integrity drives follower performance. Yet, despite overlap in conceptualization and mechanisms, no research has investigated how authentic leadership and behavioral integrity relate to one another in driving follower performance. In this study, we propose and test the notion that authentic leadership behavior is an antecedent to perceptions of leader behavioral integrity, which in turn affects follower affective organizational commitment and follower work role performance. Analysis of a survey of 49 teams in the service industry supports the proposition that authentic leadership is related to follower affective organizational commitment, fully mediated through leader behavioral integrity. Next, we found that authentic leadership and leader behavioral integrity are related to follower work role performance, fully mediated through follower affective organizational commitment. These relationships hold when controlling for ethical organizational culture.
This article introduces, studies, and applies a new system of logic which is called ‘HYPE’. In HYPE, formulas are evaluated at states that may exhibit truth value gaps and truth value gluts. Simple and natural semantic rules for negation and the conditional operator are formulated based on an incompatibility relation and a partial fusion operation on states. The semantics is worked out in formal and philosophical detail, and a sound and complete axiomatization is provided both for the propositional and the predicate logic of the system. The propositional logic of HYPE is shown to contain first-degree entailment, to have the Finite Model Property, to be decidable, to have the Disjunction Property, and to extend intuitionistic propositional logic conservatively when intuitionistic negation is defined appropriately by HYPE’s logical connectives. Furthermore, HYPE’s first-order logic is a conservative extension of intuitionistic logic with the Constant Domain Axiom, when intuitionistic negation is again defined appropriately. The system allows for simple model constructions and intuitive Euler-Venn-like diagrams, and its logical structure matches structures well-known from ordinary mathematics, such as from optimization theory, combinatorics, and graph theory. HYPE may also be used as a general logical framework in which different systems of logic can be studied, compared, and combined. In particular, HYPE is found to relate in interesting ways to classical logic and various systems of relevance and paraconsistent logic, many-valued logic, and truthmaker semantics. On the philosophical side, if used as a logic for theories of type-free truth, HYPE is shown to address semantic paradoxes such as the Liar Paradox by extending non-classical fixed-point interpretations of truth by a conditional as well-behaved as that of intuitionistic logic. Finally, HYPE may be used as a background system for modal operators that create hyperintensional contexts, though the details of this application need to be left to follow-up work.
This paper suggests a bridge principle for all-or-nothing belief and degrees of belief, to the effect that belief corresponds to a stably high degree of belief. Different ways of making this Humean thesis on belief precise are discussed, and one of them is shown to stand out by unifying the others. The resulting version of the thesis proves to be fruitful in entailing the logical closure of belief, the Lockean thesis on belief, and coherence between decision-making based on all-or-nothing beliefs and on degrees of belief.
Young children interpret some acts performed by adults as normatively governed, that is, as capable of being performed either rightly or wrongly. In previous experiments, children have made this interpretation when adults introduced them to novel acts with normative language (e.g. ‘this is the way it goes’), along with pedagogical cues signaling culturally important information, and with social-pragmatic marking that this action is a token of a familiar type. In the current experiment, we exposed children to novel actions with no normative language, and we systematically varied pedagogical and social-pragmatic cues in an attempt to identify which of them, if either, would lead children to normative interpretations. We found that young 3-year-old children inferred normativity without any normative language and without any pedagogical cues. The only cue they used was adult social-pragmatic marking of the action as familiar, as if it were a token of a well-known type (as opposed to performing it as if inventing it on the spot). These results suggest that, in the absence of explicit normative language, young children interpret adult actions as normatively governed based mainly on the intentionality (perhaps signaling conventionality) with which they are performed.
This paper explores two non-standard supermajority rules in the context of judgment aggregation over multiple logically connected issues. These rules set the supermajority threshold in a local, context-sensitive way, partly as a function of the input profile of opinions. To motivate the interest of these rules, I prove two results. First, I characterize each rule in terms of a condition I call ‘Block Preservation’. Block Preservation says that if a majority of group members accept a judgment set, then so should the group. Second, I show that one of these rules is, in a precise sense, a judgment aggregation analogue of a rule for connecting qualitative and quantitative belief that has recently been defended by Hannes Leitgeb. The structural analogy is due to the fact that Leitgeb sets thresholds for qualitative beliefs in a local, context-sensitive way, partly as a function of the given credence function.
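The need for care in setting thresholds in judgment aggregation can be seen in the classic discursive dilemma, where issue-by-issue simple majority voting over logically connected issues yields an inconsistent collective judgment set. A toy illustration (the three-agent profile is mine, not the paper’s):

```python
# Three agents judge p, q, and the conjunction p&q; each individual
# judgment set is logically consistent.
profile = [
    {"p": True,  "q": True,  "p&q": True},
    {"p": True,  "q": False, "p&q": False},
    {"p": False, "q": True,  "p&q": False},
]

def majority(profile, issue):
    """Issue-wise simple majority: accept iff more than half say yes."""
    yes = sum(agent[issue] for agent in profile)
    return yes > len(profile) / 2

collective = {issue: majority(profile, issue) for issue in ("p", "q", "p&q")}
print(collective)  # {'p': True, 'q': True, 'p&q': False} -- inconsistent as a set
```

Raising the acceptance threshold above a simple majority, and doing so locally as a function of the profile, is one way to restore collective consistency, which is the kind of rule the paper studies.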
Drawing on an idea proposed by Darwin, it has recently been hypothesised that violent intergroup conflict might have played a substantial role in the evolution of human cooperativeness and altruism. The central notion of this argument, dubbed ‘parochial altruism’, is that the two genetic or cultural traits, aggressiveness against out-groups and cooperativeness towards the in-group, including self-sacrificial altruistic behaviour, might have coevolved in humans. This review assesses the explanatory power of current theories of ‘parochial altruism’. After a brief synopsis of the existing literature, two pitfalls in the interpretation of the most widely used models are discussed: potential direct benefits and high relatedness between group members implicitly induced by assumptions about conflict structure and frequency. Then, a number of simplifying assumptions made in the construction of these models are pointed out which currently limit their explanatory power. Next, relevant empirical evidence from several disciplines which could guide future theoretical extensions is reviewed. Finally, selected alternative accounts of evolutionary links between intergroup conflict and intragroup cooperation are briefly discussed which could be integrated with parochial altruism in the future.
Rudolf Carnap's Der logische Aufbau der Welt (The Logical Structure of the World) is generally conceived of as being the failed manifesto of logical positivism. In this paper we will consider the following question: How much of the Aufbau can actually be saved? We will argue that there is an adaptation of the old system which satisfies many of the demands of the original programme. In order to defend this thesis, we have to show how a new 'Aufbau-like' programme may solve or circumvent the problems that affected the original Aufbau project. In particular, we are going to focus on how a new system may address the well-known difficulties in Carnap's Aufbau concerning abstraction, dimensionality, and theoretical terms.