Background: Empirical studies in Muslim communities on organ donation and blood transfusion show that Muslim counsellors play an important role in the decision process. Despite the emerging importance of online English Sunni fatwas, these fatwas on organ donation and blood transfusion have hardly been studied, leaving a gap in our knowledge of contemporary Islamic views on the subject. Method: We subjected 70 English Sunni e-fatwas to an in-depth text analysis in order to reveal the key concepts in the Islamic ethical framework regarding organ donation and blood transfusion. Results: All 70 fatwas allow organ donation and blood transfusion. Autotransplantation poses no problem at all if done for medical reasons. Allotransplantation, from both living and dead donors, appears to be permissible, though only under quite restrictive conditions. Xenotransplantation is mentioned less often but can be allowed in case of necessity. Transplantation in general is seen as an ongoing form of charity. Nearly half of the fatwas allowing blood transfusion do so without mentioning any restriction or problem whatsoever. The other half contain the same conditional approval found in the arguments for organ transplantation. Conclusion: Our findings are very much in line with the international literature on the subject. We found two new elements: debates on the definition of the moment of death are hardly mentioned in the English Sunni fatwas, and organ donation and blood transfusion are presented as an ongoing form of charity.
First, a brief historical trace of the developments in confirmation theory leading up to Goodman's infamous "grue" paradox is presented. Then, Goodman's argument is analyzed from both Hempelian and Bayesian perspectives. A guiding analogy is drawn between certain arguments against classical deductive logic, and Goodman's "grue" argument against classical inductive logic. The upshot of this analogy is that the "New Riddle" is not as vexing as many commentators have claimed. Specifically, the analogy reveals an intimate connection between Goodman's problem, and the "problem of old evidence". Several other novel aspects of Goodman's argument are also discussed.
In this paper, we describe our initial investigations in computational metaphysics. Our method is to implement axiomatic metaphysics in an automated reasoning system. Specifically, we describe what we discovered when the theory of abstract objects was implemented in PROVER9 (a first-order automated reasoning system which is the successor to OTTER). After reviewing the second-order, axiomatic theory of abstract objects, we show (1) how to represent a fragment of that theory in PROVER9's first-order syntax, and (2) how PROVER9 then finds proofs of interesting theorems of metaphysics, such as that every possible world is maximal. We conclude the paper by discussing some issues for further research.
In this paper, we compare and contrast two methods for the revision of qualitative beliefs. The first method is generated by a simplistic diachronic Lockean thesis requiring coherence with the agent’s posterior credences after conditionalization. The second method is the orthodox AGM approach to belief revision. Our primary aim is to determine when the two methods may disagree in their recommendations and when they must agree. We establish a number of novel results about their relative behavior. Our most notable finding is that the inverse of the golden ratio emerges as a non-arbitrary bound on the Bayesian method’s free parameter—the Lockean threshold. This “golden threshold” surfaces in two of our results and turns out to be crucial for understanding the relation between the two methods.
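For orientation, the “golden threshold” referred to above is the inverse of the golden ratio; a quick computation (ours, for reference only; the derivation of the bound is in the paper itself) gives its numerical value:

```python
import math

# The inverse golden ratio 1/phi = (sqrt(5) - 1) / 2, the "golden
# threshold" the abstract mentions as a bound on the Lockean
# threshold (computation ours, for orientation only).
phi = (1 + math.sqrt(5)) / 2
inv_phi = 1 / phi
print(round(inv_phi, 6))  # 0.618034

# A characteristic identity of the golden ratio: 1/phi = phi - 1.
print(abs(inv_phi - (phi - 1)) < 1e-12)  # True
```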
This (brief) note is about the (evidential) “favoring” relation. Pre-theoretically, favoring is a three-place (epistemic) relation between an evidential proposition E and two hypotheses H1 and H2. Favoring relations are expressed via locutions of the form: E favors H1 over H2. Strictly speaking, favoring should really be thought of as a four-place relation between E, H1, H2, and a corpus of background evidence K. But, for present purposes (which won't address issues involving K), I will suppress the background corpus, so as to simplify our discussion. Moreover, the favoring relation is meant to be a propositional epistemic relation, as opposed to a doxastic epistemic relation. That is, the favoring relation is not meant to be restricted to bodies of evidence that are possessed (as evidence) by some actual agent(s), or to hypotheses that are (in fact) entertained by some actual agent(s). In this sense, favoring is analogous to the relation of propositional justification — as opposed to doxastic justification (Conee 1980). In order to facilitate a comparison of Likelihoodist vs. Bayesian explications of favoring, I will presuppose the following bridge principle, linking favoring and evidential support: E favors H1 over H2 iff E supports H1 more strongly than E supports H2. Finally, I will only be discussing instances of the favoring relation involving contingent, empirical claims. So, it is to be understood that “favoring” will not apply if any of E, H1, or H2 are non-contingent (and/or non-empirical). With this background in place, we're ready to begin.
The Paradox of the Ravens (a.k.a., the Paradox of Confirmation) is indeed an old chestnut. A great many things have been written and said about this paradox and its implications for the logic of evidential support. The first part of this paper provides a brief survey of the early history of the paradox, including the original formulation of the paradox and the early responses of Hempel, Goodman, and Quine. The second part of the paper describes attempts to resolve the paradox within a Bayesian framework, and shows how to improve upon them. This part begins with a discussion of how probabilistic methods can help to clarify the statement of the paradox itself, and describes some of the early responses to probabilistic explications. We then inspect the assumptions employed by traditional (canonical) Bayesian approaches to the paradox. These assumptions may appear to be overly strong. So, drawing on weaker assumptions, we formulate a new-and-improved Bayesian confirmation-theoretic resolution of the Paradox of the Ravens.
Taking Joyce’s (1998; 2009) recent argument(s) for probabilism as our point of departure, we propose a new way of grounding formal, synchronic, epistemic coherence requirements for (opinionated) full belief. Our approach yields principled alternatives to deductive consistency, sheds new light on the preface and lottery paradoxes, and reveals novel conceptual connections between alethic and evidential epistemic norms.
Carnap's inductive logic (or confirmation) project is revisited from an "increase in firmness" (or probabilistic relevance) point of view. It is argued that Carnap's main desiderata can be satisfied in this setting, without the need for a theory of "logical probability." The emphasis here will be on explaining how Carnap's epistemological desiderata for inductive logic will need to be modified in this new setting. The key move is to abandon Carnap's goal of bridging confirmation and credence, in favor of bridging confirmation and evidential support.
The (recent, Bayesian) cognitive science literature on the Wason Task (WT) has been modeled largely after the (not-so-recent, Bayesian) philosophy of science literature on the Paradox of Confirmation (POC). In this paper, we apply some insights from more recent Bayesian approaches to the (POC) to analogous models of (WT). This involves, first, retracing the history of the (POC), and, then, re-examining the (WT) with these historico-philosophical insights in mind.
Contemporary Bayesian confirmation theorists measure degree of (incremental) confirmation using a variety of non-equivalent relevance measures. As a result, a great many of the arguments surrounding quantitative Bayesian confirmation theory are implicitly sensitive to choice of measure of confirmation. Such arguments are enthymematic, since they tacitly presuppose that certain relevance measures should be used (for various purposes) rather than other relevance measures that have been proposed and defended in the philosophical literature. I present a survey of this pervasive class of Bayesian confirmation-theoretic enthymemes, and a brief analysis of some recent attempts to resolve the problem of measure sensitivity.
The conjunction fallacy has been a key topic in debates on the rationality of human reasoning and its limitations. Despite extensive inquiry, however, the attempt to provide a satisfactory account of the phenomenon has proved challenging. Here we elaborate the suggestion (first discussed by Sides, Osherson, Bonini, & Viale, 2002) that in standard conjunction problems the fallacious probability judgements observed experimentally are typically guided by sound assessments of _confirmation_ relations, meant in terms of contemporary Bayesian confirmation theory. Our main formal result is a confirmation-theoretic account of the conjunction fallacy, which is proven _robust_ (i.e., not depending on various alternative ways of measuring degrees of confirmation). The proposed analysis is shown distinct from contentions that the conjunction effect is in fact not a fallacy, and is compared with major competing explanations of the phenomenon, including earlier references to a confirmation-theoretic account.
There are two central questions concerning probability. First, what are its formal features? That is a mathematical question, to which there is a standard, widely (though not universally) agreed upon answer. This answer is reviewed in the next section. Second, what sorts of things are probabilities---what, that is, is the subject matter of probability theory? This is a philosophical question, and while the mathematical theory of probability certainly bears on it, the answer must come from elsewhere. To see why, observe that there are many things in the world that have the mathematical structure of probabilities---the set of measurable regions on the surface of a table, for example---but that one would never mistake for being probabilities. So probability is distinguished by more than just its formal characteristics. The bulk of this essay will be taken up with the central question of what this “more” might be.
According to Bayesian confirmation theory, evidence E (incrementally) confirms (or supports) a hypothesis H (roughly) just in case E and H are positively probabilistically correlated (under an appropriate probability function Pr). There are many logically equivalent ways of saying that E and H are correlated under Pr. Surprisingly, this leads to a plethora of non-equivalent quantitative measures of the degree to which E confirms H (under Pr). In fact, many non-equivalent Bayesian measures of the degree to which E confirms (or supports) H have been proposed and defended in the literature on inductive logic. I provide a thorough historical survey of the various proposals, and a detailed discussion of the philosophical ramifications of the differences between them. I argue that the set of candidate measures can be narrowed drastically by just a few intuitive and simple desiderata. In the end, I provide some novel and compelling reasons to think that the correct measure of degree of evidential support (within a Bayesian framework) is the (log) likelihood ratio. The central analyses of this research have had some useful and interesting byproducts, including: (i) a new Bayesian account of (confirmationally) independent evidence, which has applications to several important problems in confirmation theory, including the problem of the (confirmational) value of evidential diversity, and (ii) novel resolutions of several problems in Bayesian confirmation theory, motivated by the use of the (log) likelihood ratio measure, including a reply to the Popper-Miller critique of probabilistic induction, and a new analysis and resolution of the problem of irrelevant conjunction (a.k.a., the tacking problem).
To the extent that we have reasons to avoid these “bad B-properties”, these arguments provide reasons not to have an incoherent credence function b — and perhaps even reasons to have a coherent one. But, note that these two traditional arguments for probabilism involve what might be called “pragmatic” reasons (not) to be (in)coherent. In the case of the Dutch Book argument, the “bad” property is pragmatically bad (to the extent that one values money). But, it is not clear whether the DBA pinpoints any epistemic defect of incoherent agents. The same can be said for Representation Theorem arguments, since they involve the structure of an agent’s preferences.
Wayne (1995) critiques the Bayesian explication of the confirmational significance of evidential diversity (CSED) offered by Horwich (1982). Presently, I argue that Wayne’s reconstruction of Horwich’s account of CSED is uncharitable. As a result, Wayne’s criticisms ultimately present no real problem for Horwich. I try to provide a more faithful and charitable rendition of Horwich’s account of CSED. Unfortunately, even when Horwich’s approach is charitably reconstructed, it is still not completely satisfying.
We give an analysis of the Monty Hall problem purely in terms of confirmation, without making any lottery assumptions about priors. Along the way, we show the Monty Hall problem is structurally identical to the Doomsday Argument.
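As background to the abstract above, a quick simulation of the classical Monty Hall setup (the simulation is ours and does not reproduce the paper's confirmation-theoretic analysis, which avoids lottery assumptions about priors): switching wins exactly when the initial pick was wrong, i.e. with probability 2/3.

```python
import random

# Classical Monty Hall (our illustrative sketch): the contestant picks a
# door, Monty opens a goat door that is neither the pick nor the car,
# and the contestant either switches or stays.
def play(switch, rng):
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # Monty opens some door that hides a goat and is not the pick.
    opened = next(d for d in doors if d != pick and d != car)
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(0)
trials = 20000
wins = sum(play(True, rng) for _ in range(trials))
print(abs(wins / trials - 2 / 3) < 0.02)  # True
```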
Several forms of symmetry in degrees of evidential support are considered. Some of these symmetries are shown not to hold in general. This has implications for the adequacy of many measures of degree of evidential support that have been proposed and defended in the philosophical literature.
The technological advances in medicine, including prolongation of life, have constituted several dilemmas at the end of life. In the context of the Belgian debates on end-of-life care, the views of Muslim women remain understudied. The aim of this article is fourfold. First, we seek to describe the beliefs and attitudes of middle-aged and elderly Moroccan Muslim women toward withholding and withdrawing life-sustaining treatments. Second, we aim to identify whether differences are observable among middle-aged and elderly women’s attitudes toward withholding and withdrawing life-sustaining treatments. Third, we aim to explore the role of religion in their attitudes. Fourth, we seek to document how our results are related to normative Islamic literature. Qualitative empirical research was conducted with a sample of middle-aged and elderly Moroccan Muslim women living in Antwerp and with experts in the field. We found an unconditional belief in God’s sovereign power over the domain of life and death and in God’s almightiness. However, we also found a tolerant attitude, mainly among our middle-aged participants, toward withholding and withdrawing based on theological, eschatological, financial and quality of life arguments. Our study reveals that religious beliefs and worldviews have a great impact on the ethical attitudes toward end-of-life issues. We found divergent positions toward withholding and withdrawing life-sustaining treatments, reflecting the lines of reasoning found in normative Islamic literature. In our interviews, theological and eschatological notions emerged as well as financial and quality of life arguments.
This essay presents the experimental subject as a figure of modernity. It addresses notions of control, sensory thresholds, automatism, and human agency through a study of experimental psychology and psychological apparatus from the late 19th century to the First World War, juxtaposing this with notions of experimentation in early 20th-century avant-garde movements. The human subject of experimental psychology, defined by its inexpression as it awaits the stimuli of testing and measurement, is treated as a prototype for the present-day user of technological interfaces.
Let E be a set of n propositions E1, ..., En. We seek a probabilistic measure C(E) of the ‘degree of coherence’ of E. Intuitively, we want C to be a quantitative, probabilistic generalization of the (deductive) logical coherence of E. So, in particular, we require C to satisfy the following..
In this note, I consider various precisifications of the slogan ‘evidence of evidence is evidence’. I provide counter-examples to each of these precisifications (assuming an epistemic probabilistic relevance notion of ‘evidential support’).
In this paper, we investigate various possible (Bayesian) precisifications of the (somewhat vague) statements of “the equal weight view” (EWV) that have appeared in the recent literature on disagreement. We will show that the renditions of (EWV) that immediately suggest themselves are untenable from a Bayesian point of view. In the end, we will propose some tenable (but not necessarily desirable) interpretations of (EWV). Our aim here will not be to defend any particular Bayesian precisification of (EWV), but rather to raise awareness about some of the difficulties inherent in formulating such precisifications.
Likelihoodists and Bayesians seem to have a fundamental disagreement about the proper probabilistic explication of relational (or contrastive) conceptions of evidential support (or confirmation). In this paper, I will survey some recent arguments and results in this area, with an eye toward pinpointing the nexus of the dispute. This will lead, first, to an important shift in the way the debate has been couched, and, second, to an alternative explication of relational support, which is in some sense a "middle way" between Likelihoodism and Bayesianism. In the process, I will propose some new work for an old probability puzzle: the "Monty Hall" problem.
According to orthodox (Kolmogorovian) probability theory, conditional probabilities are by definition certain ratios of unconditional probabilities. As a result, orthodox conditional probabilities are undefined whenever their antecedents have zero unconditional probability. This has important ramifications for the notion of probabilistic independence. Traditionally, independence is defined in terms of unconditional probabilities (the factorization of the relevant joint unconditional probabilities). Various “equivalent” formulations of independence can be given using conditional probabilities. But these “equivalences” break down if conditional probabilities are permitted to have conditions with zero unconditional probability. We reconsider probabilistic independence in this more general setting. We argue that a less orthodox but more general (Popperian) theory of conditional probability should be used, and that much of the conventional wisdom about probabilistic independence needs to be rethought.
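The breakdown described above can be illustrated in a few lines (the sketch and its function names are ours, not the paper's): on the orthodox ratio definition, P(A|B) is undefined when P(B) = 0, while the factorization definition of independence still issues a verdict there.

```python
# Minimal illustration (ours): orthodox conditional probability is the
# ratio P(A & B) / P(B), so it is undefined whenever the condition B
# has zero unconditional probability.

def cond_prob(p_a_and_b, p_b):
    """Orthodox (Kolmogorovian) conditional probability P(A|B)."""
    if p_b == 0:
        raise ZeroDivisionError("P(A|B) undefined: condition has probability 0")
    return p_a_and_b / p_b

def independent(p_a, p_b, p_a_and_b, tol=1e-12):
    """Independence via factorization: P(A & B) == P(A) * P(B)."""
    return abs(p_a_and_b - p_a * p_b) < tol

# With P(B) = 0 we must have P(A & B) = 0, so factorization trivially
# holds and A, B count as "independent" -- yet P(A|B) is undefined.
print(independent(0.5, 0.0, 0.0))  # True
try:
    cond_prob(0.0, 0.0)
except ZeroDivisionError as e:
    print(e)
```

A Popper-style primitive conditional probability function would instead assign P(A|B) directly, without the detour through the ratio.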
A Bayesian account of independent evidential support is outlined. This account is partly inspired by the work of C. S. Peirce. I show that a large class of quantitative Bayesian measures of confirmation satisfy some basic desiderata suggested by Peirce for adequate accounts of independent evidence. I argue that, by considering further natural constraints on a probabilistic account of independent evidence, all but a very small class of Bayesian measures of confirmation can be ruled out. In closing, another application of my account to the problem of evidential diversity is also discussed.
The program MaGIC (Matrix Generator for Implication Connectives) is intended as a tool for logical research. It computes small algebras (normally with up to 14 elements) suitable for modelling certain non-classical logics. Along the way, it eliminates from the output any algebra isomorphic to one already generated, thus returning only one from each isomorphism class. Optionally, the user may specify a formula which is to be..
In this discussion note, we explain how to relax some of the standard assumptions made in Garber-style solutions to the Problem of Old Evidence. The result is a more general and explanatory Bayesian approach.
Note: This is not an ad hoc change at all. It’s simply the natural thing to say here – if one thinks of F as a generalization of classical logical entailment. The extra complexity I had in my original (incorrect) definition of F was there because I was foolishly trying to encode some non-classical, or “relevant”, logical structure in F. I now think this is a mistake, and that I should go with the above, classical account of F. Arguments about relevance logic need to be handled in a different way (and a different context!). And, besides, as Luca Moretti has shown (see below), the original definition of F cannot be the right basis for C! OK, now on to C.
Naive deductive accounts of confirmation have the undesirable consequence that if E confirms H, then E also confirms the conjunction H & X, for any X—even if X is utterly irrelevant to H (and E). Bayesian accounts of confirmation also have this property (in the case of deductive evidence). Several Bayesians have attempted to soften the impact of this fact by arguing that—according to Bayesian accounts of confirmation—E will confirm the conjunction H & X less strongly than E confirms H (again, in the case of deductive evidence). I argue that existing Bayesian “resolutions” of this problem are inadequate in several important respects. In the end, I suggest a new‐and‐improved Bayesian account (and understanding) of the problem of irrelevant conjunction.
In Chapter 12 of Warrant and Proper Function, Alvin Plantinga constructs two arguments against evolutionary naturalism, which he construes as a conjunction E&N. The hypothesis E says that “human cognitive faculties arose by way of the mechanisms to which contemporary evolutionary thought directs our attention” (p. 220). With respect to proposition N, Plantinga (p. 270) says “it isn’t easy to say precisely what naturalism is,” but then adds that “crucial to metaphysical naturalism, of course, is the view that there is no such person as the God of traditional theism.” Plantinga tries to cast doubt on the conjunction E&N in two ways. His “preliminary argument” aims to show that the conjunction is probably false, given the fact (R) that our psychological mechanisms for forming beliefs about the world are generally reliable. His “main argument” aims to show that the conjunction E&N is self-defeating — if you believe E&N, then you should stop believing that conjunction. Plantinga further develops the main argument in his unpublished paper “Naturalism Defeated” (Plantinga 1994). We will try to show that both arguments contain serious errors.
Hempel first introduced the paradox of confirmation in (Hempel 1937). Since then, a very extensive literature on the paradox has evolved (Vranas 2004). Much of this literature can be seen as responding to Hempel’s subsequent discussions and analyses of the paradox in (Hempel 1945). Recently, it was noted that Hempel’s intuitive (and plausible) resolution of the paradox was inconsistent with his official theory of confirmation (Fitelson & Hawthorne 2006). In this article, we will try to explain how this inconsistency affects the historical dialectic about the paradox and how it illuminates the nature of confirmation. In the end, we will argue that Hempel’s intuitions about the paradox of confirmation were (basically) correct, and that it is his theory that should be rejected, in favor of a (broadly) Bayesian account of confirmation.
There are various questions that arise in connection with the “intelligent design” (ID) controversy. This introductory section aims to distinguish five of these questions. Later sections are devoted to detailed discussions of each of these five questions. The first (and central) question is the one that has been discussed most frequently in the news lately: (Q1) Should ID be taught in our public schools? It is helpful to break this general “public school curriculum question” into the following two more specific sub-questions: (Q1.1) Should ID be included in the science curriculum of our public schools? (Q1.2) Should ID be included in some part of our public school curriculum? Of course, these public school curriculum questions should be distinguished from other (perhaps related, but distinct) questions that are often asked about ID. Here is another question that is very frequently discussed, not only in the debate about (Q1), but in the ID controversy generally: (Q2) What is ID? In other words, is ID a scientific theory, a religious doctrine, an..
Bayesian epistemology suggests various ways of measuring the support that a piece of evidence provides for a hypothesis. Such measures are defined in terms of a subjective probability assignment, pr, over propositions entertained by an agent. The most standard measure (where “H” stands for “hypothesis” and “E” stands for “evidence”) is the difference measure: d(H,E) = pr(H/E) − pr(H). This may be called a “positive (probabilistic) relevance measure” of confirmation, since, according to it, a piece of evidence E qualitatively confirms a hypothesis H if and only if pr(H/E) > pr(H), where qualitative disconfirmation is characterized by replacing “>” with “<”, and qualitative irrelevance by replacing “>” with “=”. Other more or less standard positive relevance measures that have been proposed are the log-ratio measure: r(H,E) = log[pr(H/E)/pr(H)], and the log-likelihood-ratio measure: l(H,E) = log[pr(E/H)/pr(E/~H)].
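The three measures just defined can be computed from a joint distribution over H and E; the following sketch (function names and the toy numbers are ours, for illustration) shows that on a distribution where E raises the probability of H, all three measures agree qualitatively:

```python
import math

# The difference (d), log-ratio (r), and log-likelihood-ratio (l)
# measures defined above, computed from the four joint probabilities
# pr(H & E), pr(H & ~E), pr(~H & E), pr(~H & ~E).
def measures(p_he, p_hne, p_nhe, p_nhne):
    p_h = p_he + p_hne
    p_e = p_he + p_nhe
    p_h_given_e = p_he / p_e                    # pr(H/E)
    p_e_given_h = p_he / p_h                    # pr(E/H)
    p_e_given_nh = p_nhe / (p_nhe + p_nhne)     # pr(E/~H)
    d = p_h_given_e - p_h                       # difference measure
    r = math.log(p_h_given_e / p_h)             # log-ratio measure
    l = math.log(p_e_given_h / p_e_given_nh)    # log-likelihood-ratio
    return d, r, l

# A toy joint distribution on which E confirms H (pr(H/E) = 0.6 > 0.4
# = pr(H)): all three measures come out positive.
d, r, l = measures(0.3, 0.1, 0.2, 0.4)
print(d > 0 and r > 0 and l > 0)  # True
```

The measures agree on the qualitative verdict (confirmation vs. disconfirmation vs. irrelevance) but generally disagree on orderings of degree, which is what drives the measure-sensitivity issues discussed elsewhere in these abstracts.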
Naive deductivist accounts of confirmation have the undesirable consequence that if E confirms H, then E also confirms the conjunction H·X, for any X—even if X is completely irrelevant to E and H. Bayesian accounts of confirmation may appear to have the same problem. In a recent article in this journal, Fitelson (2002) argued that existing Bayesian attempts to resolve this problem are inadequate in several important respects. Fitelson then proposes a new‐and‐improved Bayesian account that overcomes the problem of irrelevant conjunction, and does so in a more general setting than past attempts. We will show how to simplify and improve upon Fitelson's solution.
In this review of empirical studies we aimed to assess the influence of religion and world view on nurses' attitudes towards euthanasia and physician assisted suicide. We searched PubMed for articles published before August 2008 using combinations of search terms. Most identified studies showed a clear relationship between religion or world view and nurses' attitudes towards euthanasia or physician assisted suicide. Differences in attitude were found to be influenced by religious or ideological affiliation, observance of religious practices, religious doctrines, and personal importance attributed to religion or world view. Nevertheless, a coherent comparative interpretation of the results of the identified studies was difficult. We concluded that no study has so far exhaustively investigated the relationship between religion or world view and nurses' attitudes towards euthanasia or physician assisted suicide and that further research is required.
In the first edition of LFP, Carnap undertakes a precise probabilistic explication of the concept of confirmation. This is where modern confirmation theory was born (in sin). Carnap was interested mainly in quantitative confirmation (which he took to be fundamental). But he also gave (derivative) qualitative and comparative explications: • Qualitative: E inductively supports H. • Comparative: E supports H more strongly than E supports H′. • Quantitative: E inductively supports H to degree r. Carnap begins by clarifying the explicandum (the informal “inductive support” concept) in various ways, including..