Background: As actors with the key responsibility for the protection of human research participants, Research Ethics Committees (RECs) need to be competent and well-resourced in order to fulfil their roles. Despite recent programs designed to strengthen RECs in Africa, much more needs to be accomplished before these committees can function optimally. Objective: To assess training needs for biomedical research ethics evaluation among targeted countries. Methods: Members of RECs operating in three targeted African countries were surveyed between August and November 2007. Before implementing the survey, ethical approvals were obtained from RECs in Switzerland, Cameroon, Mali and Tanzania. Data were collected using a semi-structured questionnaire in English and in French. Results: A total of 74 respondents participated in the study. The participation rate was 68%. Seventy-one percent of respondents reported having received some training in research ethics evaluation. This training was given by national institutions (31%) and international institutions (69%). Researchers and REC members were ranked as the top target audiences to be trained. Of 32 topics, the top five training priorities were: basic ethical principles, coverage of applicable laws and regulations, how to conduct ethics review, evaluating informed consent processes and the role of the REC. Conclusion: Although the majority of REC members in the targeted African countries had received training in ethics, they expressed a need for additional training. The results of this survey have been used to design a training program in research ethics evaluation that meets this need.
Moral thinking pervades our practical lives, but where did this way of thinking come from, and what purpose does it serve? Is it to be explained by environmental pressures on our ancestors a million years ago, or is it a cultural invention of more recent origin? In The Evolution of Morality, Richard Joyce takes up these controversial questions, finding that the evidence supports an innate basis to human morality. As a moral philosopher, Joyce is interested in whether any implications follow from this hypothesis. Might the fact that the human brain has been biologically prepared by natural selection to engage in moral judgment serve in some sense to vindicate this way of thinking--staving off the threat of moral skepticism, or even undergirding some version of moral realism? Or if morality has an adaptive explanation in genetic terms--if it is, as Joyce writes, "just something that helped our ancestors make more babies"--might such an explanation actually undermine morality's central role in our lives? He carefully examines both the evolutionary "vindication of morality" and the evolutionary "debunking of morality," considering the skeptical view more seriously than have others who have treated the subject. Interdisciplinary and combining the latest results from the empirical sciences with philosophical discussion, The Evolution of Morality is one of the few books in this area written from the perspective of moral philosophy. Concise and without technical jargon, the arguments are rigorous but accessible to readers from different academic backgrounds. Joyce discusses complex issues in plain language while advocating subtle and sometimes radical views. The Evolution of Morality lays the philosophical foundations for further research into the biological understanding of human morality.
In The Myth of Morality, Richard Joyce argues that moral discourse is hopelessly flawed. At the heart of ordinary moral judgements is a notion of moral inescapability, or practical authority, which, upon investigation, cannot be reasonably defended. Joyce argues that natural selection is to blame, in that it has provided us with a tendency to invest the world with values that it does not contain, and demands that it does not make. Should we therefore do away with morality, as we did away with other faulty notions such as witches? Possibly not. We may be able to carry on with morality as a 'useful fiction' - allowing it to have a regulative influence on our lives and decisions, perhaps even playing a central role - while not committing ourselves to believing or asserting falsehoods, and thus not being subject to accusations of 'error'.
Moral skepticism is the denial that there is any such thing as moral knowledge. Since the publication of The Myth of Morality in 2001, Richard Joyce has explored the terrain of moral skepticism and has been willing to advocate versions of this radical view. Joyce's attitude toward morality is analogous to an atheist's attitude toward religion: he claims that in making moral judgments speakers attempt to state truths but that the world isn't furnished with the properties and relations necessary to render such judgments true. Moral thinking probably emerged as a human adaptation, but one whose usefulness derived from its capacity to bolster social cohesion rather than its ability to track truths about the world. Essays in Moral Skepticism gathers together a dozen of Joyce's most significant papers from the last decade, following the developments in his ideas, presenting responses to critics, and charting his exploration of the complex landscape of modern moral skepticism.
Bayesianism claims to provide a unified theory of epistemic and practical rationality based on the principle of mathematical expectation. In its epistemic guise it requires believers to obey the laws of probability. In its practical guise it asks agents to maximize their subjective expected utility. Joyce’s primary concern is Bayesian epistemology and its five pillars: people have beliefs and conditional beliefs that come in varying gradations of strength; a person believes a proposition strongly to the extent that she presupposes its truth in her practical and theoretical reasoning; rational graded beliefs must conform to the laws of probability; evidential relationships should be analyzed subjectively in terms of relations among a person’s graded beliefs and conditional beliefs; empirical learning is best modeled as probabilistic conditioning. Joyce explains each of these claims and evaluates some of the justifications that have been offered for them, including “Dutch book,” “decision-theoretic,” and “non-pragmatic” arguments. He also addresses some common objections to Bayesianism, in particular the “problem of old evidence” and the complaint that the view degenerates into an untenable subjectivism. The essay closes by painting a picture of Bayesianism as an “internalist” theory of reasons for action and belief that can be fruitfully augmented with “externalist” principles of practical and epistemic rationality.
To hold an error theory about morality is to endorse a kind of radical moral skepticism—a skepticism analogous to atheism in the religious domain. The atheist thinks that religious utterances, such as “God loves you,” really are truth-evaluable assertions (as opposed to being veiled commands or expressions of hope, etc.), but that the world just doesn’t contain the items (e.g., God) necessary to render such assertions true. Similarly, the moral error theorist maintains that moral judgments are truth-evaluable assertions (thus contrasting with the noncognitivist), but that the world doesn’t contain the properties (e.g., moral goodness, evil, moral obligation) needed to render moral judgments true. In other words, moral discourse aims at the truth but systematically fails to secure it. If there is no such property as moral wrongness, for example, then no judgment of the form “X is morally wrong” will be true (where “X” denotes an actual action or state of affairs). Advocates of this position include Hinckfuss 1987; Joyce 2001; Mackie 1977 (see MACKIE, J. L.). Various forms of moral skepticism—some of which are arguably instances of the error theoretic stance—have been familiar to philosophers since ancient times. (See SKEPTICISM, MORAL.) Error theoretic views can be controversial—as in the case of religion and morality—or widely agreed upon—as in the case of ghosts and phlogiston. It is important to note that error theorists maintain that the judgments in question are erroneous not merely because of the absence of any objective moral facts sufficient to render them true, but also because of the absence of any non-objective moral facts sufficient to render them true. There is, for example, a kind of moral realist who maintains that moral properties are objective features of the universe (see REALISM, MORAL).
There is also a family of metaethical views according to which moral properties are in some manner constituted by us—by our beliefs, attitudes, practices, etc.
“Nihilism” (from the Latin “nihil” meaning nothing) is not a well-defined term. One can be a nihilist about just about anything: A philosopher who does not believe in the existence of knowledge, for example, might be called an “epistemological nihilist”; an atheist might be called a “religious nihilist.” In the vicinity of ethics, one should take care to distinguish moral nihilism from political nihilism and from existential nihilism. These last two will be briefly discussed below, only with the aim of clarifying our topic: moral nihilism. Even restricting attention to “moral nihilism,” matters remain indeterminate. Its most prominent usage in the field of metaethics treats it as a synonym for “error theory,” so an entry that said only “Nihilism: see ERROR THEORY” would not be badly misleading. This would identify moral nihilism as the metaethical view that moral discourse consists of assertions that systematically fail to secure the truth. (See Mackie 1977; Joyce 2001.) A broader definition of “nihilism” would be “the view that there are no moral facts.” This is broader because it covers not only the error theory but also noncognitivism (see NONCOGNITIVISM). Both these theories deny that there are moral facts—the difference being that the error theorist thinks that in making moral judgments we try to state facts (but fail to do so, because there are no facts of the type in question), whereas the noncognitivist thinks that in making moral judgments we do not even try to state facts (because, for example, these judgments are really veiled commands or expressions of desire). (In characterizing noncognitivism in this way, I am sidelining various linguistic permissions that may be earned via the quasi-realist program (see QUASI-REALISM).)
While it is not uncommon to see “nihilism” defined in this broader way, few contemporary noncognitivists think of themselves as “nihilists,” so it is reasonable to suspect that the extra breadth of the definition is often unintentional. Both these characterizations see moral nihilism as a purely metaethical thesis.
Joyce, Rosemarie: Since the middle of the last century, there has been a gradual change in Australian society with regard to how one understands and practises authority and obedience. In the past, those who were in positions of authority, be it church or civil, could expect to be revered and their decisions to be obeyed even if there was no personal agreement with the decision in question. But the situation has changed and continues to change. Many would agree that those who exercise authority today have to earn the respect they expect to be shown, and that they do not always have the unilateral right to make decisions that affect the general community.
The pragmatic character of the Dutch book argument makes it unsuitable as an "epistemic" justification for the fundamental probabilist dogma that rational partial beliefs must conform to the axioms of probability. To secure an appropriately epistemic justification for this conclusion, one must explain what it means for a system of partial beliefs to accurately represent the state of the world, and then show that partial beliefs that violate the laws of probability are invariably less accurate than they could be otherwise. The first task can be accomplished once we realize that the accuracy of systems of partial beliefs can be measured on a gradational scale that satisfies a small set of formal constraints, each of which has a sound epistemic motivation. When accuracy is measured in this way it can be shown that any system of degrees of belief that violates the axioms of probability can be replaced by an alternative system that obeys the axioms and yet is more accurate in every possible world. Since epistemically rational agents must strive to hold accurate beliefs, this establishes conformity with the axioms of probability as a norm of epistemic rationality whatever its prudential merits or defects might be.
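The dominance claim at the heart of this argument can be illustrated with a minimal two-proposition sketch (the numbers are illustrative, not Joyce's own example): credences over p and not-p that violate the probability axioms are strictly farther from the truth, by the Brier (squared-error) measure, than a coherent rival in every possible world.

```python
# Minimal sketch of accuracy dominance: an incoherent credence pair over
# {p, not-p} is Brier-dominated by a probabilistically coherent rival.

def brier(credences, world):
    """Squared-error inaccuracy of (c_p, c_notp) at a world of truth-values."""
    return sum((t - c) ** 2 for c, t in zip(credences, world))

incoherent = (0.2, 0.2)            # c(p) + c(not-p) = 0.4, violating the axioms
coherent = (0.5, 0.5)              # a probabilistically coherent rival

worlds = [(1.0, 0.0), (0.0, 1.0)]  # p true / p false

for w in worlds:
    # ~0.68 vs 0.5 in both worlds: the coherent credences are strictly
    # closer to the truth no matter how the world turns out.
    assert brier(coherent, w) < brier(incoherent, w)
```

The dominating coherent pair here was found by hand; the general result sketched in the abstract is that, for accuracy measures satisfying the stated formal constraints, every credence function violating the probability axioms is dominated in this way by some coherent one.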
This book defends the view that any adequate account of rational decision making must take a decision maker's beliefs about causal relations into account. The early chapters of the book introduce the non-specialist to the rudiments of expected utility theory. The major technical advance offered by the book is a 'representation theorem' showing that causal decision theory and its main rival, Richard Jeffrey's logic of decision, are both instances of a more general conditional decision theory. The book solves a long-standing problem for Jeffrey's theory by showing for the first time how to obtain a unique utility and probability representation for preferences and judgements of comparative likelihood. The book also contains a major new discussion of what it means to suppose that some event occurs or that some proposition is true. The result is the most complete and robust defence of causal decision theory available.
It might be expected that it would suffice for the entry for “moral anti-realism” to contain only some links to other entries in this encyclopedia. It could contain a link to “moral realism” and stipulate the negation of the view there described. Alternatively, it could have links to the entries “anti-realism” and “morality” and could stipulate the conjunction of the materials contained therein. The fact that neither of these approaches would be adequate—and, more strikingly, that following the two procedures would yield substantively non-equivalent results—reveals the contentious and unsettled nature of the topic. “Anti-realism,” “non-realism,” and “irrealism” may for most purposes be treated as synonymous. Occasionally, distinctions have been suggested for local pedagogic reasons (see, e.g., Wright 1988a; Dreier 2004), but no such distinction has generally taken hold. (“Quasi-realism” denotes something very different, to be discussed in the supplement Projectivism and quasi-realism below.) All three terms are to be defined in opposition to realism, but since there is no consensus on how “realism” is to be understood, “anti-realism” fares no better. Crispin Wright (1992: 1) comments that “if there ever was a consensus of understanding about ‘realism’, as a philosophical term of art, it has undoubtedly been fragmented by the pressures exerted by the various debates—so much so that a philosopher who asserts that she is a realist about theoretical science, for example, or ethics, has probably, for most philosophical audiences, accomplished little more than to clear her throat.” This entry doesn't purport to do justice to the intricacy and subtlety of the topic of realism; it should be acknowledged at the outset that the fragmentation of which Wright speaks renders it unlikely that the label “moral anti-realism” even succeeds in picking out a definite position. Yet perhaps we can at least make an advance on clearing our throats.
In his paper “The Error in the Error Theory” [this journal, 2008], Stephen Finlay attempts to show that the moral error theorist has not only failed to prove his case, but that the error theory is in fact false. This paper rebuts Finlay's arguments, criticizes his positive theory, and clarifies the error-theoretic position.
Andy Egan has recently produced a set of alleged counterexamples to causal decision theory in which agents are forced to decide among causally unratifiable options, thereby making choices they know they will regret. I show that, far from being counterexamples, CDT gets Egan's cases exactly right. Egan thinks otherwise because he has misapplied CDT by requiring agents to make binding choices before they have processed all available information about the causal consequences of their acts. I elucidate CDT in a way that makes it clear where Egan goes wrong, and which explains why his examples pose no threat to the theory. My approach has similarities to a modification of CDT proposed by Frank Arntzenius, but it differs in the significance that it assigns to potential regrets. I maintain, contrary to Arntzenius, that an agent facing Egan's decisions can rationally choose actions that she knows she will later regret. All rationality demands of agents is that they maximize unconditional causal expected utility from an epistemic perspective that accurately reflects all the available evidence about what their acts are likely to cause. This yields correct answers even in outlandish cases in which one is sure to regret whatever one does.
Were I not afraid of appearing too philosophical, I should remind my reader of that famous doctrine, supposed to be fully proved in modern times, “That tastes and colours, and all other sensible qualities, lie not in the bodies, but merely in the senses.” The case is the same with beauty and deformity, virtue and vice. This doctrine, however, takes off no more from the reality of the latter qualities, than from that of the former; nor need it give any umbrage either to critics or moralists. Though colours were allowed to lie only in the eye, would dyers or painters ever be less regarded or esteemed? There is a sufficient uniformity in the senses and feelings of mankind, to make all these qualities the objects of art and reasoning, and to have the greatest influence on life and manners. And as it is certain, that the discovery above-mentioned in natural philosophy, makes no alteration on action and conduct; why should a like discovery in moral philosophy make any alteration?
Background: The high disease burden of Africa, the emergence of new diseases and efforts to address the 10/90 gap have led to an unprecedented increase in health research activities in Africa. Consequently, there is an increase in the volume and complexity of protocols that ethics review committees in Africa have to review. Methods: With a grant from the Bill and Melinda Gates Foundation, the African Malaria Network Trust (AMANET) undertook a survey of 31 ethics review committees (ERCs) across sub-Saharan Africa as an initial step to a comprehensive capacity-strengthening programme. The number of members per committee ranged from 3 to 21, with an average of 11. Members of 10 institutional committees were all from the institution where the committees were based, raising prima facie questions as to whether independence and objectivity could be guaranteed in the review work of such committees. Results: The majority of the committees (92%) cited scientific design of clinical trials as the area needing the most attention in terms of training, followed by determination of risks and benefits and monitoring of research. The survey showed that 38% of the ERC members did not receive any form of training. In the light of the increasing complexity and numbers of health research studies being conducted in Africa, this deficit requires immediate attention. Outcome: The survey identified areas of weakness in the operations of ERCs in Africa. Consequently, AMANET is addressing the identified needs and weaknesses through a 4-year capacity-building project.
Recently several authors have argued that accuracy-first epistemology ends up licensing problematic epistemic bribes. They charge that it is better, given the accuracy-first approach, to deliberately form one false belief if this will lead to forming many other true beliefs. We argue that this is not a consequence of the accuracy-first view. If one forms one false belief and a number of other true beliefs, then one is committed to many other false propositions, e.g., the conjunction of that false belief with any of the true beliefs. Once we properly account for all the falsehoods that are adopted by the person who takes the bribe, it turns out that the bribe does not increase accuracy.
This collection reports on the latest research on an increasingly pivotal issue for evolutionary biology: cooperation. The chapters are written from a variety of disciplinary perspectives and utilize research tools that range from empirical survey to conceptual modeling, reflecting the rich diversity of work in the field. They explore a wide taxonomic range, concentrating on bacteria, social insects, and, especially, humans. Part I (“Agents and Environments”) investigates the connections of social cooperation in social organizations to the conditions that make cooperation profitable and stable, focusing on the interactions of agent, population, and environment. Part II (“Agents and Mechanisms”) focuses on how proximate mechanisms emerge and operate in the evolutionary process and how they shape evolutionary trajectories. Throughout the book, certain themes emerge that demonstrate the ubiquity of questions regarding cooperation in evolutionary biology: the generation and division of the profits of cooperation; transitions in individuality; levels of selection, from gene to organism; and the “human cooperation explosion” that makes our own social behavior particularly puzzling from an evolutionary perspective.
What contribution can the empirical sciences make to metaethics? This paper outlines an argument to a particular metaethical conclusion - that moral judgments are epistemically unjustified - that depends in large part on a posteriori premises.
Isaac Levi has long criticized causal decision theory on the grounds that it requires deliberating agents to make predictions about their own actions. A rational agent cannot, he claims, see herself as free to choose an act while simultaneously making a prediction about her likelihood of performing it. Levi is wrong on both points. First, nothing in causal decision theory forces agents to make predictions about their own acts. Second, Levi's arguments for the “deliberation crowds out prediction thesis” rely on a flawed model of the measurement of belief. Moreover, the ability of agents to adopt beliefs about their own acts during deliberation is essential to any plausible account of human agency and freedom. Though these beliefs play no part in the rationalization of actions, they are required to account for the causal genesis of behavior. To explain the causes of actions we must recognize that (a) an agent cannot see herself as entirely free in the matter of A unless she believes her decision to perform A will cause A, and (b) she cannot come to a deliberate decision about A unless she adopts beliefs about her decisions. Following Elizabeth Anscombe and David Velleman, I argue that an agent's beliefs about her own decisions are self-fulfilling, and that this can be used to explain away the seemingly paradoxical features of act probabilities.
Confirmation theory is intended to codify the evidential bearing of observations on hypotheses, characterizing relations of inductive “support” and “countersupport” in full generality. The central task is to understand what it means to say that datum E confirms or supports a hypothesis H when E does not logically entail H.
Taking as its point of departure the work of moral philosopher John Mackie (1917-1981), A World Without Values is a collection of essays on moral skepticism by leading contemporary philosophers, some of whom are sympathetic to Mackie's ...
Richard Jeffrey long held that decision theory should be formulated without recourse to explicitly causal notions. Newcomb problems stand out as putative counterexamples to this ‘evidential’ decision theory. Jeffrey initially sought to defuse Newcomb problems via recourse to the doctrine of ratificationism, but later came to see this as problematic. We will see that Jeffrey’s worries about ratificationism were not compelling, but that valid ratificationist arguments implicitly presuppose causal decision theory. In later work, Jeffrey argued that Newcomb problems are not decisions at all because agents who face them possess so much evidence about correlations between their actions and states of the world that they are unable to regard their deliberate choices as causes of outcomes, and so cannot see themselves as making free choices. Jeffrey’s reasoning goes wrong because it fails to recognize that an agent’s beliefs about her immediately available acts are so closely tied to the immediate causes of these actions that she can create evidence that outweighs any antecedent correlations between acts and states. Once we recognize that deliberating agents are free to believe what they want about their own actions, it will be clear that Newcomb problems are indeed counterexamples to evidential decision theory.
Different versions of moral projectivism are delineated: minimal, metaphysical, nihilistic, and noncognitivist. Minimal projectivism (the focus of this paper) is the conjunction of two subtheses: (1) that we experience morality as an objective aspect of the world and (2) that this experience has its origin in an affective attitude (e.g., an emotion) rather than in perceptual faculties. Both are empirical claims and must be tested as such. This paper does not offer ideas on any specific test procedures, but rather undertakes the important preliminary task of clarifying the content of these subtheses (e.g., what is meant by "objective"? what is meant by "experience"?). Finally, attention is given to the relation between (a) acknowledging that the projectivist account might be true of a token moral judgment and (b) maintaining moral projectivism to be true as a general thesis.
In his contribution to this volume, Paul Bloomfield analyzes and attempts to answer the question “Why is it bad to be bad?” I too will use this question as my point of departure; in particular I want to approach the matter from the perspective of a moral error theorist. This discussion will preface one of the principal topics of this paper: the relationship between morality and self-interest. Again, my main goal is to clarify what the moral error theorist might say on this subject. Against this background, the final portion of this paper will be a discussion of moral fictionalism, defending it from some objections.
The task of this paper is to argue that expressivism [the thesis that moral judgements function to express desires, emotions, or pro/con attitudes] neither implies, nor is implied by, [motivational internalism].
The Evolution of Morality attempts to accomplish two tasks. The first is to clarify and provisionally advocate the thesis that human morality is a distinct adaptation wrought by biological natural selection. The second is to inquire whether this empirical thesis would, if true, have any metaethical implications.
Bayes' Theorem is a simple mathematical formula used for calculating conditional probabilities. It figures prominently in subjectivist or Bayesian approaches to epistemology, statistics, and inductive logic. Subjectivists, who maintain that rational belief is governed by the laws of probability, lean heavily on conditional probabilities in their theories of evidence and their models of empirical learning. Bayes' Theorem is central to these enterprises both because it simplifies the calculation of conditional probabilities and because it clarifies significant features of the subjectivist position. Indeed, the Theorem's central insight — that a hypothesis is confirmed by any body of data that its truth renders probable — is the cornerstone of all subjectivist methodology.
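As a quick worked illustration (the numbers are made up), Bayes' Theorem computes P(H|E) from a prior P(H) and the likelihoods P(E|H) and P(E|not-H), with the denominator P(E) supplied by the law of total probability:

```python
# Toy illustration of Bayes' Theorem: P(H|E) = P(E|H) P(H) / P(E),
# where P(E) = P(E|H) P(H) + P(E|not-H) P(not-H).

def posterior(prior, likelihood, false_positive):
    """P(H|E) given P(H), P(E|H), and P(E|not-H)."""
    p_e = likelihood * prior + false_positive * (1 - prior)
    return likelihood * prior / p_e

p = posterior(prior=0.3, likelihood=0.9, false_positive=0.2)
print(round(p, 3))  # 0.659

# The datum raises P(H) from 0.3 to about 0.66, so on the subjectivist
# account E confirms H: H's truth renders E probable, and conditioning
# on E accordingly boosts confidence in H.
assert p > 0.3
```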
I argue that one central aspect of the epistemology of causation, the use of causes as evidence for their effects, is largely independent of the metaphysics of causation. In particular, I use the formalism of Bayesian causal graphs to factor the incremental evidential impact of a cause for its effect into a direct cause-to-effect component and a backtracking component. While the “backtracking” evidence that causes provide about earlier events often obscures things, once we restrict our attention to the cause-to-effect component it is true to say that promoting (inhibiting) causes raise (lower) the probabilities of their effects. This factoring assumes the same form whether causation is given an interventionist, counterfactual or probabilistic interpretation. Whether we think about causation in terms of interventions and causal graphs, counterfactuals and imaging functions, or probability raising against the background of causally homogenous partitions, if we describe the essential features of a situation correctly then the incremental evidence that a cause provides for its effect in virtue of being its cause will be the same.
It is widely believed that the Divine Command Theory is untenable due to the Euthyphro Dilemma. This article first examines the Platonic dialogue of that name, and shows that Socrates’s reasoning is faulty. Second, the dilemma in the form in which many contemporary philosophers accept it is examined in detail, and this reasoning is also shown to be deficient. This is not to say, however, that the Divine Command Theory is true—merely that one popular argument for rejecting it is unsound. Finally, some brief thoughts are presented concerning where the real problems lie for the theory.
A growing body of research suggests that students achieve learning outcomes at higher rates when instructors use active-learning methods rather than standard modes of instruction. To investigate how one such method might be used to teach philosophy, we observed two classes that employed Reacting to the Past, an educational role-immersion game. We chose to investigate Reacting because role-immersion games are considered a particularly effective active-learning strategy. Professors who have used Reacting to teach history, interdisciplinary humanities, and political theory agree that it engages students and teaches general skills like collaboration and communication. We investigated whether it can be effective for teaching philosophical content and skills like analyzing, evaluating, crafting, and communicating arguments in addition to bringing the more general benefits of active learning to philosophy classrooms. Overall, we find Reacting to be a useful tool for achieving these ends. While we do not argue that Reacting is uniquely useful for teaching philosophy, we conclude that it is worthy of consideration by philosophers interested in creative active-learning strategies, especially given that it offers a prepackaged set of flexible, user-friendly tools for motivating and engaging students.
Facts about the evolutionary origins of morality may have some kind of undermining effect on morality, yet the arguments that advocate this view are varied not only in their strategies but in their conclusions. The most promising such argument is modest: it attempts to shift the burden of proof in the service of an epistemological conclusion. This paper principally focuses on two other debunking arguments. First, I outline the prospects of trying to establish an error theory on genealogical grounds. Second, I discuss how a debunking strategy can work even under the assumption that noncognitivism is true.
Colin Radford must weary of defending his thesis that the emotional reactions we have towards fictional characters, events, and states of affairs are irrational.1 Yet, for all the discussion, the issue has not, to my mind, been properly settled—or at least not settled in the manner I should prefer—and so this paper attempts once more to debunk Radford’s defiance of common sense. For some, the question of whether our emotional responses to fiction are rational does not arise, for they are inclined to doubt that we have them at all.2 Emotions, on this view, are fundamentally linked to belief states, as in the following thesis concerning the emotion of fear: 1) We fear for ourselves only if we believe ourselves to be in danger; we fear for others only if we believe they actually exist and are in danger. When we typically engage with fiction we do not ‘suspend our disbelief’, in the sense of coming to believe that the fiction is non-fiction. No matter how engrossed I become in a Dracula movie, I do not begin to believe that I am seeing actual vampires. 2) When we watch a horror movie, we do not believe ourselves, or anyone actual, to be in danger. And so these theorists, endorsing (1) and (2), are obliged to deny the intuitive (3): 3) We are sometimes frightened when watching a horror movie. These three propositions are a version of what is sometimes called ‘The Paradox of Fiction’. For my money, since the denial of (2) is foolish, and the denial of (3) deeply counterintuitive, it is (1)—being a substantive philosophical thesis—that is most likely the culprit. Radford agrees, yet maintains that there is some intimate connection between belief and emotion.
For him, the dependence is not the existential one stated in (1), but a normative one: we do not rationally feel fear unless we believe ourselves (or someone actual) to be in danger.3 This revision of the connection allows the construction of a quite different inconsistent triad: 4) We are not rationally frightened unless we believe someone actual to be in danger.
Colin Howson has recently argued that accuracy arguments for probabilism fail because they assume a privileged ‘coding’ in which TRUE is assigned the value 1 and FALSE is assigned the value 0. I explain why this is wrong by first showing that Howson’s objections are based on a misconception about the way in which degrees of confidence are measured, and then reformulating the accuracy argument in a way that manifestly does not depend on the coding of truth-values. Along the way, I will explain how to formulate the laws of probability and rational expectation in a scale-invariant way, and how to properly understand the values of the credence functions that we use to represent rational degrees of confidence.
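The coding point can be sketched in miniature (a toy illustration, not Joyce's full reformulation): if the truth-values are recoded by any affine map TRUE = a, FALSE = b with a distinct from b, and credences are reported on the same recoded scale, then squared-error comparisons between rival credence functions come out exactly the same, so accuracy orderings do not privilege the 1/0 coding.

```python
# Sketch: squared-error comparisons between credence functions are invariant
# under an affine recoding of the truth-values (TRUE = a, FALSE = b instead
# of 1, 0), provided credences are recoded the same way. The recoding merely
# rescales every inaccuracy score by the positive constant (a - b)**2.

def inaccuracy(credences, world):
    """Squared-error distance between credences and the coded truth-values."""
    return sum((t - c) ** 2 for c, t in zip(credences, world))

def recode(values, a, b):
    """Map the 1/0 coding onto an arbitrary coding TRUE = a, FALSE = b."""
    return tuple((a - b) * v + b for v in values)

c1, c2 = (0.2, 0.2), (0.5, 0.5)    # two rival credence functions over {p, not-p}
world = (1.0, 0.0)                 # p true, not-p false, in the standard coding

for a, b in [(1, 0), (10, -3), (2.5, 0.5)]:
    w = recode(world, a, b)
    i1 = inaccuracy(recode(c1, a, b), w)
    i2 = inaccuracy(recode(c2, a, b), w)
    # The comparison agrees with the standard 1/0 coding in every case.
    assert (i1 > i2) == (inaccuracy(c1, world) > inaccuracy(c2, world))
```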
The United States considers educating all students to a threshold of adequate outcomes to be a central goal of educational justice. The No Child Left Behind Act introduced evidence-based policy and accountability protocols to ensure that all students receive an education that enables them to meet adequacy standards. Unfortunately, evidence-based policy has been less effective than expected. This article pinpoints under-examined methodological problems and suggests a more effective way to incorporate educational research findings into local evidence-based policy decisions. It identifies some things educators need to know and do to determine whether available interventions can play the right causal role in their setting to produce desired effects. It examines the value and limits of educational research, especially randomized controlled trials, for this task.