Background: Empirical studies in Muslim communities on organ donation and blood transfusion show that Muslim counsellors play an important role in the decision process. Despite the emerging importance of online English Sunni fatwas, these fatwas on organ donation and blood transfusion have hardly been studied, creating a gap in our knowledge of contemporary Islamic views on the subject. Method: We analysed 70 English Sunni e-fatwas and subjected them to an in-depth text analysis in order to reveal the key concepts in the Islamic ethical framework regarding organ donation and blood transfusion. Results: All 70 fatwas allow for organ donation and blood transfusion. Autotransplantation poses no problem when done for medical reasons. Allotransplantation, from both living and dead donors, appears to be permissible, though only under quite restrictive conditions. Xenotransplantation is mentioned less often but can be allowed in cases of necessity. Transplantation in general is seen as an ongoing form of charity. Nearly half of the fatwas allowing blood transfusion do so without mentioning any restriction or problem whatsoever. The other half of the fatwas on transfusion contain the same conditional approval as found in the arguments for organ transplantation. Conclusion: Our findings are very much in line with the international literature on the subject. We found two new elements: debates on the definition of the moment of death are hardly mentioned in the English Sunni fatwas, and organ donation and blood transfusion are presented as an ongoing form of charity.
The Art of Living Consciously Is an Operating Manual for Our Basic Tool of Survival In The Art of Living Consciously, Dr. Nathaniel Branden, our foremost authority on self-esteem, takes us into new territory, exploring the actions of our minds when they are operating as our life and well-being require -- and also when they are not. No other book illuminates so clearly what true mindfulness means: * In the workplace * In the arena of romantic love * In child-rearing * In the pursuit of personal development Today we are exposed to an unprecedented amount of information and an unprecedented number of opinions about every conceivable aspect of life. We are thrown on our own resources as never before -- and we have nothing to protect us but the clarity of our thinking. In The Art of Living Consciously, Branden gives us the tools with which to draw out the best within us.
These comments, on the paper by Branden Thornhill-Miller and Peter Millican 1 and on the critique of that paper by Janusz Salamon 2, divide into four sections. In the first two sections, I briefly sketch some of the major themes from the paper by Thornhill-Miller and Millican, and then from the critique by Salamon. In the final two sections, I provide some critical thoughts on Salamon’s objections to Thornhill-Miller and Millican, and then on the leading claims made by Thornhill-Miller and Millican. I find much to commend, but also some things to dispute, in both papers. As is so often the way, I shall focus on areas of disagreement.
Taking Joyce’s (1998; 2009) recent argument(s) for probabilism as our point of departure, we propose a new way of grounding formal, synchronic, epistemic coherence requirements for (opinionated) full belief. Our approach yields principled alternatives to deductive consistency, sheds new light on the preface and lottery paradoxes, and reveals novel conceptual connections between alethic and evidential epistemic norms.
Contemporary Bayesian confirmation theorists measure degree of (incremental) confirmation using a variety of non-equivalent relevance measures. As a result, a great many of the arguments surrounding quantitative Bayesian confirmation theory are implicitly sensitive to choice of measure of confirmation. Such arguments are enthymematic, since they tacitly presuppose that certain relevance measures should be used (for various purposes) rather than other relevance measures that have been proposed and defended in the philosophical literature. I present a survey of this pervasive class of Bayesian confirmation-theoretic enthymemes, and a brief analysis of some recent attempts to resolve the problem of measure sensitivity.
According to Bayesian confirmation theory, evidence E (incrementally) confirms (or supports) a hypothesis H (roughly) just in case E and H are positively probabilistically correlated (under an appropriate probability function Pr). There are many logically equivalent ways of saying that E and H are correlated under Pr. Surprisingly, this leads to a plethora of non-equivalent quantitative measures of the degree to which E confirms H (under Pr). In fact, many non-equivalent Bayesian measures of the degree to which E confirms (or supports) H have been proposed and defended in the literature on inductive logic. I provide a thorough historical survey of the various proposals, and a detailed discussion of the philosophical ramifications of the differences between them. I argue that the set of candidate measures can be narrowed drastically by just a few intuitive and simple desiderata. In the end, I provide some novel and compelling reasons to think that the correct measure of degree of evidential support (within a Bayesian framework) is the (log) likelihood ratio. The central analyses of this research have had some useful and interesting byproducts, including: (i) a new Bayesian account of (confirmationally) independent evidence, which has applications to several important problems in confirmation theory, including the problem of the (confirmational) value of evidential diversity, and (ii) novel resolutions of several problems in Bayesian confirmation theory, motivated by the use of the (log) likelihood ratio measure, including a reply to the Popper-Miller critique of probabilistic induction, and a new analysis and resolution of the problem of irrelevant conjunction (a.k.a., the tacking problem).
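The non-equivalence of relevance measures described above can be seen on a toy model. This is an illustrative sketch, not from the abstract itself; all probabilities are hypothetical.

```python
from math import log

# Hypothetical probabilities for a hypothesis H and evidence E.
p_H = 0.3            # prior Pr(H)
p_E_given_H = 0.9    # likelihood Pr(E|H)
p_E_given_notH = 0.2 # likelihood Pr(E|~H)

# Total probability and Bayes' theorem.
p_E = p_E_given_H * p_H + p_E_given_notH * (1 - p_H)
p_H_given_E = p_E_given_H * p_H / p_E

# Three non-equivalent relevance measures of the degree to which E confirms H:
d = p_H_given_E - p_H                  # difference measure
r = log(p_H_given_E / p_H)             # (log) ratio measure
l = log(p_E_given_H / p_E_given_notH)  # (log) likelihood ratio measure

print(round(d, 3), round(r, 3), round(l, 3))
# All three are positive (E confirms H), but the degrees disagree.
```

All measures agree on the qualitative verdict (confirmation vs. disconfirmation) yet assign different magnitudes, which is why quantitative arguments can be measure-sensitive.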
Let E be a set of n propositions E1, ..., En. We seek a probabilistic measure C(E) of the ‘degree of coherence’ of E. Intuitively, we want C to be a quantitative, probabilistic generalization of the (deductive) logical coherence of E. So, in particular, we require C to satisfy the following…
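The abstract's list of requirements is truncated, so as one concrete example of the kind of quantity C(E) at issue, here is Shogenji's ratio measure from the coherence literature. This is an assumption for illustration, not necessarily the measure the paper itself defends; the probabilities are hypothetical.

```python
def shogenji_coherence(p_joint, p_marginals):
    """Ratio of the joint probability of a set of propositions to the
    product of their marginal probabilities.
    C > 1: positively coherent; C = 1: independent; C < 1: incoherent."""
    prod = 1.0
    for p in p_marginals:
        prod *= p
    return p_joint / prod

# Two strongly overlapping propositions (hypothetical numbers):
print(shogenji_coherence(0.4, [0.5, 0.5]))  # 1.6 > 1: a coherent pair
```

On this candidate, coherence is deviation from probabilistic independence, which is one natural way to generalize deductive consistency quantitatively.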
To the extent that we have reasons to avoid these “bad B-properties”, these arguments provide reasons not to have an incoherent credence function b — and perhaps even reasons to have a coherent one. But note that these two traditional arguments for probabilism involve what might be called “pragmatic” reasons (not) to be (in)coherent. In the case of the Dutch Book argument, the “bad” property is pragmatically bad (to the extent that one values money). But it is not clear whether the DBA pinpoints any epistemic defect of incoherent agents. The same can be said for Representation Theorem arguments, since they involve the structure of an agent’s preferences.
Several forms of symmetry in degrees of evidential support are considered. Some of these symmetries are shown not to hold in general. This has implications for the adequacy of many measures of degree of evidential support that have been proposed and defended in the philosophical literature.
Logic has traditionally been construed as a normative discipline; it sets forth standards of correct reasoning. Explosion is a valid principle of classical logic. It states that an inconsistent set of propositions entails any proposition whatsoever. However, ordinary agents presumably do — occasionally, at least — have inconsistent belief sets. Yet it is false that such agents may, let alone ought to, believe any proposition they please. Therefore, our logic should not recognize explosion as a logical law. Call this the ‘normative argument against explosion’. Arguments of this type play — implicitly or explicitly — a central role in motivating paraconsistent logics. Branden Fitelson, in a throwaway remark, has conjectured that there is no plausible ‘bridge principle’ articulating the normative link between logic and reasoning capable of supporting such arguments. This paper offers a critical evaluation of Fitelson’s conjecture, and hence of normative arguments for paraconsistency and the conceptions of logic’s normative status on which they repose. It is argued that Fitelson’s conjecture turns out to be correct: normative arguments for paraconsistency probably fail.
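The classical validity of explosion can be verified mechanically: no valuation satisfies an inconsistent premise set, so the entailment holds vacuously. A minimal brute-force sketch (illustrative only; names are mine, not the paper's):

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """Classical entailment via truth tables: the conclusion must be true
    on every valuation that makes all premises true."""
    for vals in product([False, True], repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

# Explosion: the inconsistent set {P, not-P} entails an arbitrary Q,
# vacuously, because no row of the table satisfies both premises.
P = lambda v: v['P']
notP = lambda v: not v['P']
Q = lambda v: v['Q']
print(entails([P, notP], Q, ['P', 'Q']))  # True
```

The normative argument sketched in the abstract targets precisely this vacuous step: from the fact that {P, ¬P} classically entails Q, nothing plausibly follows about what an inconsistent believer ought to believe.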
In this paper, we compare and contrast two methods for the revision of qualitative beliefs. The first method is generated by a simplistic diachronic Lockean thesis requiring coherence with the agent’s posterior credences after conditionalization. The second method is the orthodox AGM approach to belief revision. Our primary aim is to determine when the two methods may disagree in their recommendations and when they must agree. We establish a number of novel results about their relative behavior. Our most notable finding is that the inverse of the golden ratio emerges as a non-arbitrary bound on the Bayesian method’s free parameter: the Lockean threshold. This “golden threshold” surfaces in two of our results and turns out to be crucial for understanding the relation between the two methods.
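A minimal sketch of the Lockean idea the abstract builds on: believe exactly those propositions whose credence meets a threshold t, here set to the abstract's "golden threshold" 1/φ. The credence values are hypothetical, chosen only to illustrate the cutoff.

```python
# The inverse of the golden ratio: 1/phi = 2 / (1 + sqrt(5)) ≈ 0.618.
golden_threshold = 2 / (1 + 5 ** 0.5)

# Hypothetical posterior credences after conditionalization.
credences = {'p': 0.95, 'q': 0.62, 'r': 0.40}

def lockean_beliefs(cr, t):
    """Qualitative belief set induced by a Lockean threshold t."""
    return {prop for prop, c in cr.items() if c >= t}

beliefs = lockean_beliefs(credences, golden_threshold)
print(beliefs)  # 'p' and 'q' clear the golden threshold; 'r' does not
```

On the paper's account, the threshold is the method's free parameter; the result mentioned above is that 1/φ constrains which values of t keep the Lockean method in step with AGM.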
In this paper, we investigate various possible (Bayesian) precisifications of the (somewhat vague) statements of “the equal weight view” (EWV) that have appeared in the recent literature on disagreement. We will show that the renditions of (EWV) that immediately suggest themselves are untenable from a Bayesian point of view. In the end, we will propose some tenable (but not necessarily desirable) interpretations of (EWV). Our aim here will not be to defend any particular Bayesian precisification of (EWV), but rather to raise awareness about some of the difficulties inherent in formulating such precisifications.
In this note, I consider various precisifications of the slogan ‘evidence of evidence is evidence’. I provide counter-examples to each of these precisifications (assuming an epistemic probabilistic relevance notion of ‘evidential support’).
Inquiry into the meaning of logical terms in natural language (‘and’, ‘or’, ‘not’, ‘if’) has generally proceeded along two dimensions. On the one hand, semantic theories aim to predict native speaker intuitions about the natural language sentences involving those logical terms. On the other hand, logical theories explore the formal properties of the translations of those terms into formal languages. Sometimes, these two lines of inquiry appear to be in tension: for instance, our best logical investigation into conditional connectives may show that there is no conditional operator that has all the properties native speaker intuitions suggest ‘if’ has. Indicative conditionals have famously been the source of one such tension, ever since the triviality proofs of both Lewis (1976) and Gibbard (1981) established conclusions which are in prima facie tension with ordinary judgments about natural language indicative conditionals. In a recent series of papers, Branden Fitelson has strengthened both triviality results (Fitelson 2013, 2015, 2016), revealing a common culprit: a logical schema known as IMPORT-EXPORT. Fitelson’s results focus the tension between the logical results and ordinary judgments, since IMPORT-EXPORT seems to be supported by intuitions about natural language. In this paper, we argue that the intuitions which have been taken to support IMPORT-EXPORT are really evidence for a closely related, but subtly different, principle. We show that the two principles are independent by showing how, given a standard assumption about the conditional operator in the formal language in which IMPORT-EXPORT is stated, many existing theories of indicative conditionals validate one, but not the other. Moreover, we argue that once we clearly distinguish these principles, we can use propositional anaphora to show that IMPORT-EXPORT is in fact not valid for natural language indicative conditionals (given this assumption about the formal conditional operator).
This gives us a principled and independently motivated way of rejecting a crucial premise in many triviality results, while still making sense of the speaker intuitions which appeared to motivate that premise. We suggest that this strategy has broad application and an important lesson: in theorizing about the logic of natural language, we must pay careful attention to the translation between the formal languages in which logical results are typically proved, and natural languages which are the subject matter of semantic theory.
Likelihoodists and Bayesians seem to have a fundamental disagreement about the proper probabilistic explication of relational (or contrastive) conceptions of evidential support (or confirmation). In this paper, I will survey some recent arguments and results in this area, with an eye toward pinpointing the nexus of the dispute. This will lead, first, to an important shift in the way the debate has been couched, and, second, to an alternative explication of relational support, which is in some sense a "middle way" between Likelihoodism and Bayesianism. In the process, I will propose some new work for an old probability puzzle: the "Monty Hall" problem.
The conjunction fallacy has been a key topic in debates on the rationality of human reasoning and its limitations. Despite extensive inquiry, however, the attempt to provide a satisfactory account of the phenomenon has proved challenging. Here we elaborate the suggestion (first discussed by Sides, Osherson, Bonini, & Viale, 2002) that in standard conjunction problems the fallacious probability judgements observed experimentally are typically guided by sound assessments of _confirmation_ relations, meant in terms of contemporary Bayesian confirmation theory. Our main formal result is a confirmation-theoretic account of the conjunction fallacy, which is proven _robust_ (i.e., not depending on various alternative ways of measuring degrees of confirmation). The proposed analysis is shown distinct from contentions that the conjunction effect is in fact not a fallacy, and is compared with major competing explanations of the phenomenon, including earlier references to a confirmation-theoretic account.
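A minimal numerical sketch of the confirmation-theoretic suggestion (hypothetical Linda-style numbers, not taken from the paper): evidence can make a conjunction more strongly confirmed than one of its conjuncts even though the conjunction remains less probable.

```python
# Hypothetical Linda-style probabilities: A = "bank teller", B = "feminist",
# E = the personality description. A and B are treated as independent.
p_A, p_B = 0.1, 0.5
p_A_E, p_B_E = 0.1, 0.9   # posteriors given E: E supports B, is neutral on A
p_AB = p_A * p_B          # prior of the conjunction A & B
p_AB_E = p_A_E * p_B_E    # posterior of the conjunction given E

def confirmation_diff(posterior, prior):
    """Difference measure of confirmation: positive iff E raises probability."""
    return posterior - prior

print(round(confirmation_diff(p_AB_E, p_AB), 2))  # 0.04: E confirms A & B
print(round(confirmation_diff(p_A_E, p_A), 2))    # 0.0: E does not confirm A
print(p_AB_E < p_A_E)                             # True: A & B stays less probable
```

This is the pattern the abstract appeals to: judgements tracking confirmation (the conjunction is better supported) come apart from judgements tracking posterior probability (the conjunction is never more probable than a conjunct).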
A Bayesian account of independent evidential support is outlined. This account is partly inspired by the work of C. S. Peirce. I show that a large class of quantitative Bayesian measures of confirmation satisfy some basic desiderata suggested by Peirce for adequate accounts of independent evidence. I argue that, by considering further natural constraints on a probabilistic account of independent evidence, all but a very small class of Bayesian measures of confirmation can be ruled out. In closing, another application of my account to the problem of evidential diversity is also discussed.
First, a brief historical trace of the developments in confirmation theory leading up to Goodman's infamous "grue" paradox is presented. Then, Goodman's argument is analyzed from both Hempelian and Bayesian perspectives. A guiding analogy is drawn between certain arguments against classical deductive logic, and Goodman's "grue" argument against classical inductive logic. The upshot of this analogy is that the "New Riddle" is not as vexing as many commentators have claimed. Specifically, the analogy reveals an intimate connection between Goodman's problem, and the "problem of old evidence". Several other novel aspects of Goodman's argument are also discussed.
Note: This is not an ad hoc change at all. It’s simply the natural thing to say here – if one thinks of F as a generalization of classical logical entailment. The extra complexity I had in my original (incorrect) definition of F was there because I was foolishly trying to encode some non-classical, or “relevant”, logical structure in F. I now think this is a mistake, and that I should go with the above, classical account of F. Arguments about relevance logic need to be handled in a different way (and a different context!). And, besides, as Luca Moretti has shown (see below), the original definition of F cannot be the right basis for C! OK, now on to C.
Naive deductive accounts of confirmation have the undesirable consequence that if E confirms H, then E also confirms the conjunction H & X, for any X—even if X is utterly irrelevant to H (and E). Bayesian accounts of confirmation also have this property (in the case of deductive evidence). Several Bayesians have attempted to soften the impact of this fact by arguing that—according to Bayesian accounts of confirmation—E will confirm the conjunction H & X less strongly than E confirms H (again, in the case of deductive evidence). I argue that existing Bayesian “resolutions” of this problem are inadequate in several important respects. In the end, I suggest a new-and-improved Bayesian account (and understanding) of the problem of irrelevant conjunction.
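The softening move the abstract describes can be checked on a toy case. A minimal sketch with hypothetical numbers, using the difference measure (one choice among the non-equivalent Bayesian measures; the paper's own diagnosis is more general):

```python
# Deductive evidence: H entails E, and H & X entails E, with X irrelevant.
p_E = 0.5
p_H = 0.2
p_X = 0.5          # X probabilistically independent of H and of E
p_HX = p_H * p_X   # Pr(H & X)

# Since H entails E, Bayes gives Pr(H|E) = Pr(H)/Pr(E); likewise for H & X.
d_H = p_H / p_E - p_H     # difference-measure confirmation of H by E
d_HX = p_HX / p_E - p_HX  # difference-measure confirmation of H & X by E

print(d_H > d_HX > 0)  # True: E confirms both, but the conjunction less
```

On these numbers d_H = 0.2 and d_HX = 0.1, so the irrelevant conjunct halves the degree of confirmation without eliminating it, which is exactly the fact Bayesians appeal to.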
We give an analysis of the Monty Hall problem purely in terms of confirmation, without making any lottery assumptions about priors. Along the way, we show the Monty Hall problem is structurally identical to the Doomsday Argument.
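For readers unfamiliar with the underlying puzzle, a standard simulation of the Monty Hall setup is sketched below. This is the classic probabilistic version, not the confirmation-theoretic, priors-free analysis the abstract develops.

```python
import random

def monty_hall_trial(switch, rng):
    """One round of Monty Hall: the host always opens a non-chosen goat door."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(0)  # seeded for reproducibility
n = 100_000
wins = sum(monty_hall_trial(switch=True, rng=rng) for _ in range(n))
print(wins / n)  # ≈ 2/3: switching wins about two thirds of the time
```

The abstract's point is that this familiar 2/3 verdict can be recovered purely in terms of confirmation relations, without assuming a uniform ("lottery") prior over door locations.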
The Paradox of the Ravens (a.k.a., The Paradox of Confirmation) is indeed an old chestnut. A great many things have been written and said about this paradox and its implications for the logic of evidential support. The first part of this paper will provide a brief survey of the early history of the paradox. This will include the original formulation of the paradox and the early responses of Hempel, Goodman, and Quine. The second part of the paper will describe attempts to resolve the paradox within a Bayesian framework, and show how to improve upon them. This part begins with a discussion of how probabilistic methods can help to clarify the statement of the paradox itself. And it describes some of the early responses to probabilistic explications. We then inspect the assumptions employed by traditional (canonical) Bayesian approaches to the paradox. These assumptions may appear to be overly strong. So, drawing on weaker assumptions, we formulate a new-and-improved Bayesian confirmation-theoretic resolution of the Paradox of the Ravens.
In this discussion note, we explain how to relax some of the standard assumptions made in Garber-style solutions to the Problem of Old Evidence. The result is a more general and explanatory Bayesian approach.
Carnap's inductive logic (or confirmation) project is revisited from an "increase in firmness" (or probabilistic relevance) point of view. It is argued that Carnap's main desiderata can be satisfied in this setting, without the need for a theory of "logical probability." The emphasis here will be on explaining how Carnap's epistemological desiderata for inductive logic will need to be modified in this new setting. The key move is to abandon Carnap's goal of bridging confirmation and credence, in favor of bridging confirmation and evidential support.
Richard Feldman has proposed and defended different versions of a principle about evidence. In slogan form, the principle holds that ‘evidence of evidence is evidence’. Recently, Branden Fitelson has argued that Feldman’s preferred rendition of the principle falls prey to a counterexample related to the non-transitivity of the evidence-for relation. Feldman replies, arguing that Fitelson’s case does not really represent a counterexample to the principle. In this note, we argue that Feldman’s principle is trivially true.