The popularization of neuroscientific ideas about learning—sometimes legitimate, sometimes merely commercial—poses a real challenge for classroom teachers who want to understand how children learn. Until teacher preparation programs are reconceived to incorporate relevant research from the neuro- and cognitive sciences, teachers need translation and guidance to effectively use information about the brain and cognition. Absent such guidance, teachers, schools, and school districts may waste time and money pursuing so-called brain-based interventions that lack a firm basis in research. Meanwhile, the success of our schools will continue to be narrowly defined by achievement standards that ignore knowledge of the neural and cognitive processes of learning. To achieve the goals of neuroeducation, its proponents must address the unique ethical issues that neuroeducation raises for five different groups of individuals: a) practicing teachers, b) neuroscience researchers whose work could inform education, c) publishers and the popular media, d) educational policy-makers, and e) university-level teacher educators. We suggest ways in which these ethical challenges can be met and provide a model for teacher preparation that will enable teachers themselves to translate findings from the neuro- and cognitive sciences and use legitimate research to inform how they design and deliver effective instruction.
Software piracy is a damaging and important moral issue, which is widely believed to go unchecked in particular regions of the globe. This cross-cultural study examines differences in morality and behavior toward software piracy in Singapore versus the United States, and reviews the cultural histories of Asia and the United States to explore why these differences occur. The paper is based upon pilot data collected in the U.S. and Singapore using a tradeoff analysis methodology. The data reveal some fascinating interactions between the level of ethical transgression and the rewards or consequences that it produces.
This article examines the ethnographic case study in education in the context of policy making, with particular emphasis on the practice of research and policy making. The central claim of the article is that it is impossible to establish a transcendental epistemology of the case study based on instrumental rationality. Instead it argues for the notion of situated judgement that needs to be made by practitioners in context, practitioners being both researchers and policy makers. In other words, questions about the level of confidence or warrant that can be placed in different sorts of research evidence and findings cannot be answered independently of forming a view about the appropriateness of the policy culture that shapes political decision-making. The article draws a distinction between the general, which is internal to the data as construed by a particular discipline, and the universal, which is the result of embedded human deliberation. This applies to all research findings and not only to case study, although since case study has long had to defend itself against accusations of the lack of generality, it can be a useful starting point for the discussion. This article is not meant to be yet another defence of the case study research genre, although a summary of other defences is offered. Rather it focuses on how use of the case study points to the limits of epistemology as rationality and offers a view of epistemology as ethics.
I analyze the “Sportsman’s Code,” arguing that several of its rules presuppose a respect for animals that renders hunting a prima facie wrong. I summarize the main arguments used to justify hunting and consider them in relation to the prima facie case against hunting entailed by the sportsman’s code. Sport hunters, I argue, are in a paradoxical position—the more conscientiously they follow the code, the more strongly their behavior exemplifies a respect for animals that undermines the possibilities of justifying hunting altogether. I consider several responses, including embracing the paradox, renouncing the code, and renouncing hunting.
Kenney, Mark. Review of: A Source Critical Edition of the Gospels of Matthew and Luke in Greek and English, 2 vols., by Christopher J. Monaghan, C.P., Rome: Gregorian and Biblical Press, 2010, pp. 378, 45.00.
This edited collection had its origins in a two-day conference held at the Tate Britain, organised collaboratively by research staff and students at Middlesex University and the London Consortium in order to celebrate the 250th anniversary of the publication of Edmund Burke's famous book on the sublime. The conference was funded by Middlesex University, the London Consortium and the Tate Britain's AHRC-funded "Sublime Object: Nature, Art and Language" research project. The conference set out to critically examine the legacy of the sublime in contemporary art, culture and society and to assess the value and the dangers of this concept as it is articulated in current thought and practice. The book selected from and expanded on the papers delivered at the conference in order to pursue this goal further. It was broken into themed sections (each of which had an introduction), each exploring a different issue around contemporary uses of the sublime. The sections were: 1. Nature, Ecology and the Sublime; 2. The Sublime After Kant; 3. Capitalism, Terror, Art and the Sublime; 4. Baroque and Beyond: Art, Sex and the Sublime; 5. The Cinematic Sublime. The volume reflects the interdisciplinarity of the concept of the sublime today, and includes essays whose approaches come from aesthetics and ethics, ecological and political thought, psychoanalysis, feminism, film studies, literary studies, art history and popular culture. It includes papers by internationally renowned authors from the UK, America and Europe alongside the new voices of younger academics.
The contributors were: Jane Bennett (Johns Hopkins University), Mark Bould (University of the West of England), Eu Jin Chua (London Consortium), Gudrun Filipska (Middlesex University), Cornelia Klinger (Institute for Human Sciences, Vienna / University of Tübingen, Germany), Esther Leslie (Birkbeck), William McDonald (Middlesex University), Laura Mulvey (Birkbeck), Claire Pajaczkowska (Royal College of Art), Griselda Pollock (University of Leeds), Gene Ray (Geneva University of Art and Design), Bettina Reiber (Central St. Martins), Jan Rosiek (University of Copenhagen), Sherryl Vint (Brock University, Canada), and Luke White (Middlesex University).
Best known for his theories of ideology and its impact on politics and culture, Louis Althusser revolutionized Marxist theory. His writing changed the face of literary and cultural studies and continues to influence political modes of criticism such as feminism, postcolonialism and queer theory. Beginning with an introduction to the crucial context of Marxist theory, this book goes on to explain: - How Althusser interpreted and developed Marx's work - The political implications of reading - Ideology and its significance for culture and criticism - Althusser's aesthetic criticism of literature, theatre and art. Placing Althusser's key ideas in the context of earlier Marxist thought, as well as tracing their development and impact, Luke Ferretter provides a wide-ranging yet accessible guide, ideal for those new to the work of this influential critical thinker.
Many have thought that there is a problem with causal commerce between immaterial souls and material bodies. In Physicalism or Something Near Enough, Jaegwon Kim attempts to spell out that problem. Rather than merely posing a question or raising a mystery for defenders of substance dualism to answer or address, he offers a compelling argument for the conclusion that immaterial souls cannot causally interact with material bodies. We offer a reconstruction of that argument that hinges on two premises: Kim’s Dictum and the Nowhere Man principle. Kim’s Dictum says that causation requires a spatial relation. Nowhere Man says that souls can’t be in space. By our lights, both premises can be called into question. We’ll begin our evaluation of the argument by pointing out some consequences of Kim’s Dictum. For some, these will be costs. We will then present two defeaters for Kim’s Dictum and a critical analysis of Kim’s case for Nowhere Man. The upshot is that Kim’s argument against substance dualism fails.
I examine different strategies involved in stating anti-theistic arguments from natural evil, and consider some theistic replies. There are, traditionally, two main types of arguments from natural evil: those that purport to deduce a contradiction between the existence of natural evil and the existence of God, and those that claim that the existence of certain types or quantities of natural evil significantly lowers the probability that theism is true. After considering peripheral replies, I state four prominent theistic rebutting strategies: skeptical theism; Richard Swinburne's view that moral knowledge entails natural evil; the soul-making theodicy; and the natural law theodicy.
I argue that there are non-trivial objective chances (that is, objective chances other than 0 and 1) even in deterministic worlds. The argument is straightforward. I observe that there are probabilistic special scientific laws even in deterministic worlds. These laws project non-trivial probabilities for the events that they concern. And these probabilities play the chance role and so should be regarded as chances as opposed, for example, to epistemic probabilities or credences. The supposition of non-trivial deterministic chances might seem to land us in contradiction. The fundamental laws of deterministic worlds project trivial probabilities for the very same events that are assigned non-trivial probabilities by the special scientific laws. I argue that any appearance of tension is dissolved by recognition of the level-relativity of chances. There is therefore no obstacle to accepting non-trivial chance-role-playing deterministic probabilities as genuine chances.
Consider people’s ordinary concept of belief. This concept seems to pick out a particular psychological state. Indeed, one natural view would be that the concept of belief works much like the concepts one finds in cognitive science – not quite as rigorous or precise, perhaps, but still the same basic type of notion. But now suppose we turn to other concepts that people ordinarily use to understand the mind. Suppose we consider the concept happiness. Or the concept love. How are these concepts to be understood? One obvious hypothesis would be that they are best understood as being more or less like the concept of belief. Maybe these concepts, too, pick out a particular mental state and thereby enable people to predict, explain and understand others’ behavior. We will argue that this hypothesis is mistaken. Instead, we suggest that the different concepts people use to understand the mind are fundamentally different from each other. Some of these concepts do indeed serve simply to pick out a particular mental state, but others allow a role for evaluative judgments. So, for example, our claim will be that when people are wondering whether a given agent is truly ‘happy’ or ‘in love,’ they are not merely trying to figure out whether this agent has a particular sort of mental state. They are also concerned in a central way with evaluating the agent herself. In short, our aim is to point to a striking sort of difference between the different concepts that people use to pick out psychological attitudes.
Moral principles play important roles in diverse areas of moral thought, practice, and theory. Many who think of themselves as ‘moral generalists’ believe that moral principles can play these roles—that they are capable of doing so. Moral generalism maintains that moral principles can and do play these roles because true moral principles are statements of general moral fact (i.e. statements of facts about the moral attributes of kinds of actions, kinds of states of affairs, etc.) and because general moral facts explain particular moral facts (i.e. facts about the moral attributes of particulars). Moral holism maintains that what is a moral reason to φ in one case may not be one in another, and may even be a moral reason not to φ given suitable circumstances. Some ‘moral particularists’ maintain that moral holism motivates scepticism about the existence of and need for moral principles, along with scepticism about the viability of principle-based approaches to ethics and moral theory. But I argue that moral holism is itself a form of moral generalism, one that takes facts about the right- and wrong-making powers of (generic) moral factors to explain certain particular moral facts—namely, the rightness and wrongness of particular actions. I also argue that a moral-theoretic version of dispositionalism—the view that dispositions, powers, or capacities are the fundamental units of explanation—explains both why moral holism is true and why moral generalism is true.
Adam Morton, Stephen de Wijze, Hillel Steiner, and Eve Garrard have defended the view that evil action is qualitatively distinct from ordinary wrongdoing. By this, they do not mean that evil actions feel different from ordinary wrongs, but that they have motives or effects that are not possessed to any degree by ordinary wrongs. Despite their professed intentions, Morton and de Wijze both offer accounts of evil action that fail to identify a clear qualitative difference between evil and ordinary wrongdoing. In contrast, both Steiner's and Garrard's accounts of evil do point to qualitative distinctions between kinds of action, but it is implausible that either account correctly characterizes evil. The most plausible accounts maintain that evil actions have a necessary connection to extreme harms, and this suggests that evil is not qualitatively distinct from ordinary wrongdoing.
I consider two views that combine different elements of general theistic replies to natural evil, those of Peter van Inwagen and William Hasker. I end with a Hasker-style defense – one that, unlike Hasker's, denies the existence of pointless natural evils – and some brief observations on the direction of future debate.
The conceptions of jealousy used by philosophical writers are various, and, this paper suggests, largely inadequate. In particular, the difference between jealousy and envy has not yet been plausibly specified. This paper surveys some past analyses of this distinction and addresses problems with them, before proposing its own positive account of jealousy, developed from an idea of Leila Tov-Ruach (a.k.a. A. O. Rorty). Three conditions for being jealous are proposed, and it is shown how each of them helps to tell the emotion apart from some distinct species of envy. It is acknowledged that the referents of the two terms are, to some extent, overlapping, but shown how this overlap is justified by the psychologies of the respective emotions.
It is intuitively plausible that not every evildoer is an evil person. In order to make sense of this intuition we need to construct an account of evil personhood in addition to an account of evil action. Some philosophers have offered aggregative accounts of evil personhood, but these do not fit well with common intuitions about the explanatory power of evil personhood, the possibility of moral reform, and the relationship between evil and luck. In contrast, a dispositional account of evil personhood can allow that evil is explanatory, that an evil person can become good, and that luck might prevent evil persons from doing evil or cause non-evil persons to do evil. Yet the dispositional account of evil personhood implies that some evil persons are blameless, which seems to clash with the intuition that evil persons deserve our strongest moral condemnation. Moreover, since it is likely that a large proportion of us are disposed to perform evil actions in some environments, the dispositional account threatens to label a large proportion of people evil. In this paper I consider a range of possible modifications to the dispositional account that might bring it more closely into alignment with our intuitions about moral condemnation and the rarity of evil persons. According to the most plausible of these theories, S is an evil person if S is strongly disposed to perform evil actions when in conditions that favour S’s autonomy.
I address a question in moral metaphysics: How are conflicts between moral obligations possible? I begin by explaining why we cannot give a satisfactory answer to this question simply by positing that such conflicts are conflicts between rules, principles, or reasons. I then develop and defend the “Dispositional Account,” which posits that conflicts between moral obligations are conflicts between the manifestations of obligating dispositions (obligating powers, capacities, etc.), just as conflicts between physical forces are conflicts between the manifestations of (certain) causal dispositions (causal powers, capacities, etc.). This account combines the so-called “moral forces” interpretation of prima facie obligations with a dispositional moral metaphysic according to which the metaphysical grounds of moral obligations are not rules or laws, but rather real, irreducibly dispositional properties (or powers) of moral agents and patients. My principal aims are to offer a theoretically attractive and suitably metaphysical account of conflicts of obligation, and to show that the dispositional moral metaphysic that grounds the Dispositional Account can explain and accommodate plausible normative views that rule- and law-based alternatives cannot, as well as to answer objections that have been pressed against other accounts of moral conflict (especially Ross’s) that appeal to moral dispositions or forces.
Is evil a distinct moral concept? Or are evil actions just very wrong actions? Some philosophers have argued that evil is a distinct moral concept. These philosophers argue that evil is qualitatively distinct from ordinary wrongdoing. Other philosophers have suggested that evil is only quantitatively distinct from ordinary wrongdoing. On this view, evil is just very wrong. In this paper I argue that evil is qualitatively distinct from ordinary wrongdoing. The first part of the paper is critical. I argue that Luke Russell’s attempt to show that evil is only quantitatively distinct from ordinary wrongdoing fails. Russell’s argument fails because it is based on an implausible criterion for determining whether two concepts are qualitatively distinct. I offer a more plausible criterion and argue that based on this criterion evil and wrongdoing are qualitatively distinct. To help make my case, I sketch a theory of evil which makes a genuinely qualitative distinction between evil and wrongdoing. I argue that we cannot characterize evil as just very wrong on plausible conceptions of evil and wrongdoing. I focus on act-consequentialist, Kantian, and contractarian conceptions of wrongdoing.
What are moral principles? In particular, what are moral principles of the sort that (if they exist) ground moral obligations or—at the very least—particular moral truths? I argue that we can fruitfully conceive of such principles as real, irreducibly dispositional properties of individual persons (agents and patients) that are responsible for and thereby explain the moral properties of (e.g.) agents and actions. Such moral dispositions (or moral powers) are apt to be the metaphysical grounds of moral obligations and of particular truths about what is morally permissible, impermissible, etc. Moreover, they can do other things that moral principles are supposed to do: explain the phenomena “falling within their scope,” support counterfactuals, and ground moral necessities, “necessary connections” between obligating reasons and obligations. And they are apt to be the truthmakers for moral laws, or “lawlike” moral generalizations.
The dominant approach to analyzing the meaning of natural language sentences that express mathematical knowledge relies on a referential, formal semantics. Below, I discuss an argument against this approach and in favour of an internalist, conceptual, intensional alternative. The proposed shift in analytic method offers several benefits, including a novel perspective on what is required to track mathematical content, and hence on the Benacerraf dilemma. The new perspective also promises to facilitate discussion between philosophers of mathematics and cognitive scientists working on topics of common interest.
Moral obligations rest on circumstances (events, states of affairs, etc.). But what are these obligating reasons and in virtue of what are they such reasons? Nomological conceptions define such reasons in terms of moral laws. I argue that one such conception cannot be correct and that others do not support the familiar and plausible view that obligating reasons are pro tanto (or contributory) reasons, either because they entail that this view is false or else because they cannot explain—or even help to explain—how it could be true. I also argue that a particular dispositional conception of obligating reasons does support this view of obligating reasons by enabling an explanation of how it could be true. Moreover, my arguments show that the dispositional moral metaphysic on which this conception is predicated can do something that nomological alternatives cannot: explain why obligating reasons and moral obligations are pro tanto reasons and obligations.
The starting point in the development of probabilistic analyses of token causation has usually been the naïve intuition that, in some relevant sense, a cause raises the probability of its effect. But there are well-known examples both of non-probability-raising causation and of probability-raising non-causation. Sophisticated extant probabilistic analyses treat many such cases correctly, but only at the cost of excluding the possibilities of direct non-probability-raising causation, failures of causal transitivity, action-at-a-distance, prevention, and causation by absence and omission. I show that an examination of the structure of these problem cases suggests a different treatment, one which avoids the costs of extant probabilistic analyses.
Psychologism is the attempt to account for the necessary truths of mathematics in terms of contingent psychological facts. It is widely regarded as a fallacy. Jackendoff's view of reference and truth entails psychologism. Therefore, he needs to either provide a defense of the doctrine, or show that the charge doesn't apply.
In his book The Myth of Evil, Phillip Cole claims that the concept of evil divides normal people from inhuman, demonic and monstrous wrongdoers. Such monsters are found in fiction, Cole maintains, but not in reality. Thus, even if the concept of evil has the requisite form to be explanatorily useful, it will be of no explanatory use in the real world. My aims in this paper are to assess Cole’s arguments for the claim that there are no actual evil persons, and, in so doing, to develop a clearer framework in which to think about evil personhood. While Cole is right to claim that there are no actual evil monsters or supernatural demons, he underestimates the extent to which ascriptions of demonic monstrosity are figurative rather than literal. Hence, a lack of actual monsters does not imply a lack of actual evil persons. More plausibly, Cole suggests that the concept of evil implies an unrealistically dualistic worldview, with purely evil people on one side and ordinary people on the other. Since no one is purely bad, Cole claims, the concept of evil fails to refer to actual persons. Cole is wrong to think that the use of extreme moral concepts is incompatible with fine-grained moral evaluations across a broad spectrum between the extremes. Nor is Cole sufficiently careful in unpacking the various ways in which a person might be considered purely bad. I will argue that some actual persons are extremely bad, that it is very likely that some actual persons are fixedly bad, and that quite possibly no actual persons are thoroughly bad or innately bad. It is plausible that a person is evil only if he is extremely and fixedly bad, but Cole is wrong to suppose that a person is evil only if he is thoroughly and innately bad. Thus, even if we accept Cole’s claim that no actual person is thoroughly or innately bad, it still seems very likely that some actual persons are evil, and hence that evil can be an explanatorily useful concept.
The degree of realism that Duns Scotus understood his formal distinction to have implied is a matter of dispute going back to the fourteenth century. Both modern and medieval commentators alike have seen Scotus's later, Parisian treatment of the formal distinction as less realist in the sense that it would deny any extra-mentally separate formalities or realities. This less realist reading depends in large part on a question known to scholars only in the highly corrupt edition of Luke Wadding, where it is printed as the first of the otherwise spurious Quaestiones miscellaneae de formalitatibus. The present study examines this question in detail. Cited by Scotus's contemporaries as the Quaestio logica Scoti, we establish that it was a special disputation held by Scotus at Paris in response to criticisms of his use of the formal distinction in God, identify its known manuscripts, and provide an analysis based upon a corrected text, showing in particular the total unreliability of the Wadding edition. Our analysis shows that the Logica Scoti does not absolutely prohibit an assertion of formalities as correlates of the formal distinction, even in the divine Person, so long as their non-identity is properly qualified. That is, the positing of formalities does not of itself entail an unqualified or absolute distinction.
What are moral principles? The assumption underlying much of the generalism–particularism debate in ethics is that they are (or would be) moral laws: generalizations or some special class thereof, such as explanatory or counterfactual-supporting generalizations. I argue that this law conception of moral principles is mistaken. For moral principles do at least three things that moral laws cannot do, at least not in their own right: explain certain phenomena, provide particular kinds of support for counterfactuals, and ground moral necessities, “necessary connections” between obligating reasons and obligations. Moreover, neither a best-systems theory of moral principles nor any of the competing theories of moral principles proposed by Sean McKeever and Michael Ridge, Pekka Väyrynen, and Mark Lance and Margaret Little could vindicate the law conception of moral principles. I conclude with some brief remarks about what moral principles might be if they are not moral laws.
An influential tradition in the philosophy of causation has it that all token causal facts are, or are reducible to, facts about difference-making. Challenges to this tradition have typically focused on pre-emption cases, in which a cause apparently fails to make a difference to its effect. However, a novel challenge to the difference-making approach has recently been issued by Alyssa Ney. Ney defends causal foundationalism, which she characterizes as the thesis that facts about difference-making depend upon facts about physical causation. She takes this to imply that causation is not fundamentally a matter of difference-making. In this paper, I defend the difference-making approach against Ney’s argument. I also offer some positive reasons for thinking, pace Ney, that causation is fundamentally a matter of difference-making.
In this book, Mumford and Anjum advance a theory of causation based on a metaphysics of powers. The book is for the most part lucidly written, and contains some interesting contributions: in particular on the (lack of) necessary connection between cause and effect and on the perceivability of the causal relation. I do, however, have reservations about some of the book’s central theses: in particular, that cause and effect are simultaneous, and that causes can fruitfully be represented as vectors.
It is often said that an argument is valid if and only if it is impossible for its premises to be jointly true and its conclusion false. Usually there is little harm in saying this, but it places the concept of truth at the very heart of logic and, given how complex and obscure that concept is, one might wonder if trouble arises from this. It does — in at least two contexts. One of these was explored in the first half of the fourteenth century by Jean Buridan and by the mysterious figure known as the Pseudo-Scotus of the Questions on the Prior Analytics printed in the edition of Scotus's works edited by Luke Wadding. Buridan thought that the bearers of truth were particular sentence-tokens; he thought of truth as a …
Though almost forty years have elapsed since its first publication, it is a testament to the philosophical acumen of its author that 'The Matter of Chance' contains much that is of continued interest to the philosopher of science. Mellor advances a sophisticated propensity theory of chance, arguing that this theory makes better sense than its rivals (in particular subjectivist, frequentist, logical and classical theories) of ‘what professional usage shows to be thought true of chance’ (p. xi) – in particular ‘that chance is objective, empirical and not relational, and that it applies to the single case’ (ibid.). The book is short and dense, with the serious philosophical content delivered thick and fast. There is little by way of road-mapping or summarising to assist the reader: the introduction is hardly expansive and the concluding paragraph positively perfunctory. The result is that the book is often difficult going, and the reader is made to work hard to ensure correct understanding of the views expressed. On the other hand, the author’s avoidance of unnecessary use of formalism and jargon ensures that the book is still reasonably accessible. In the following, I shall first summarise the key features of Mellor’s propensity theory, and then offer a few critical remarks.
We investigate whether standard counterfactual analyses of causation (CACs) imply that the outcomes of space-like separated measurements on entangled particles are causally related. While it has sometimes been claimed that standard CACs imply such a causal relation, we argue that a careful examination of David Lewis's influential counterfactual semantics casts doubt upon this. We discuss ways in which Lewis's semantics and standard CACs might be extended to the case of space-like correlations.
In this journal, Luke Russell defends a sophisticated dispositional account of evil personhood according to which a person is evil just in case she is strongly and highly fixedly disposed to perform evil actions in conditions that favour her autonomy. While I am generally sympathetic with this account, I argue that Russell wrongly dismisses the mirror thesis—roughly, the thesis that evil people are the mirror images of the morally best sort of persons—which I have defended elsewhere. Russell’s rejection of the mirror thesis depends upon an independently implausible account of moral sainthood, one that is implausible for reasons that Russell himself suggests in another context. Indeed, an account of moral sainthood that parallels Russell’s account of evil personhood is plausible for the same reasons that his account of evil personhood is, and that suggests that Russell himself is actually committed to the mirror thesis.