Oregon is the only state in the United States where a physician may legally prescribe a lethal dose of barbiturate for a patient intending suicide. The Oregon Death with Dignity Act was passed by voters in 1994 and came into effect, after much legal wrangling, in October 1997. At the same time, a cabinetmaker named Pat Matheny was struggling with progressive weakness from amyotrophic lateral sclerosis, or ALS. I met with Pat and his family for a lengthy interview in October 1998 in Coos Bay, Oregon, for a television news report on his decision to get a lethal prescription. Below is an extract from that interview. On the day this introduction was written, 10 March 1999, Pat took the prescribed lethal overdose of barbiturates and died at home. His illness was taking his voice, he could not move his hands or legs, and breathing was becoming very difficult. His mother told me he knew that was for him.
Seeking to expand on previous theories, this paper explores the AIR (Applying Intelligence to the Reflexes) approach to expert performance previously outlined by Geeves, Christensen, Sutton and McIlwain (2008). Data gathered from a semi-structured interview investigating the performance experience of Jeremy Kelshaw (JK), a professional musician, is explored. Although JK’s experience of music performance contains inherently uncertain elements, his phenomenological description of an ideal performance is tied to notions of vibe, connection and environment. The dynamic nature of music performance advocated by the AIR approach is illustrated by the strategies that JK implements during performance. Through executing these strategies, JK attempts to increase the likelihood of vibe and connection by selectively exercising agency over performance variables within his control. In order to achieve this, JK must engage in ongoing monitoring of his performance, whereby the spotlight of his attention pans across a vast array of disparate performance processes (and levels within these processes) in order to ascertain how he can most effectively meet the specific demands of a given performance situation. It is hoped that future research compiling data from numerous interviews and sources as well as using different research methodologies will further unlock the potential that the AIR approach holds for understanding expert performance.
[W. J. T.] Mitchell focuses on the exemplary status of the Wall of Fame in Sal’s Pizzeria, “an array of signed publicity photos of Italian-American stars in sports, movies, and popular music”. He argues that the Wall “exemplifies the central contradictions of public art”. “The Wall,” he writes, “is important to Sal not just because it displays famous Italians but because they are famous Americans … who have made it possible for Italians to think of themselves as Americans, full-fledged members of the general public sphere”. For Buggin’ Out, the young black customer who angrily objects to the absence of photos of black people, the Wall “signifies exclusion from the public sphere”. Although the streets are saturated with images of “African-American heroes,” those “tokens of self-respect” are not enough for Buggin’ Out, who wants “the respect of whites, the acknowledgment that African-Americans are hyphenated Americans, too, just like Italians”. Mitchell astutely interprets the desired integration of the Wall as merely a symptom of a larger struggle for “full economic participation. As long as blacks do not own private property in this society,” he states, “they remain in something like the status of public art, mere ornaments to the public place, entertaining statues and abstract caricatures rather than full human beings”. By foregrounding the economic implications of the film, Mitchell has surely engaged one of the dominant goals of the man who formed Forty Acres and a Mule Productions and who recently opened the store called Spike’s Joint in New York City. Yet Mitchell’s sympathetic account belies the countercurrents that trouble the ostensible progressiveness of Spike Lee’s ambitious art. Jerome Christensen teaches in the English department at Johns Hopkins University. He is the author of books on Coleridge and Hume and one forthcoming on Byron.
Currently, he is completing a study of the continued pertinence of the romantic turn of mind called Romantic Theory, Romantic Practice.
If you are anything like me, you may feel yourself unsure of what, as a critic these days, you ought to be talking about—whether literature qua literature, literature as rhetoric, literature as politics or as history, whether about the persistence of romanticism or the waxing of postmodernism, the decline of Yale or the rise of Duke. If, like me, you are puzzled by what we now ought to be about, you may also be like Paul de Man, who bespoke a similar concern: “In a manner that is more acute for theoreticians of literature than for theoreticians of the natural or the social world, it can be said that they do not quite know what it is they are talking about, … that, whenever one is supposed to speak of literature, one speaks of anything under the sun except literature. The need for determination,” de Man concludes, “thus becomes all the stronger as a way to safeguard a discipline which constantly threatens to degenerate into gossip, trivia or self-obsession.”1 De Man’s wishes are rarely fulfilled, and this instance is no exception. Despite the critic’s determinations, theory, it turns out, is the story of the failure of safeguards to do the job for which they are designed. There is no better instance of that ironic truth than the career of Paul de Man. No critic has fallen farther despite his determination: from a paragon of analytical rigor, he has become the most gossiped-about critic of the late 1980s. 1. Paul de Man, The Resistance to Theory, p. 29; hereafter abbreviated RT. Jerome Christensen teaches English at The Johns Hopkins University. He is the author of Coleridge’s Blessed Machine of Language and Practicing Enlightenment: Hume and the Formation of a Literary Career. This essay is part of a work in progress entitled Prefigurations: Romantic Theory and Romantic Practice.
In his recent book Criticism and Social Change Frank Lentricchia melodramatically pits his critical hero Kenneth Burke, advocate of the intellect’s intervention in social life, against the villainous Paul de Man, “undisputed master in the United States of what is called deconstruction.” Lentricchia charges that “the insidious effect of [de Man’s] work is not the proliferating replication of his way of reading … but the paralysis of praxis itself: an effect that traditionalism, with its liberal view of the division of culture and political power, should only applaud.”1 He goes on to prophesy that “The deconstruction of deconstruction will reveal, against apparent intention, a tacit political agenda after all, one that can only embarrass deconstruction, particularly its younger proponents whose activist experiences within the socially wrenching upheavals of the 1960s and early 1970s will surely not permit them easily to relax, without guilt and self-hatred, into resignation and ivory tower despair” [CSC, p. 40]. Such is Lentricchia’s strenuous conjuration of a historical moment in which he can forcefully intervene—a summons fraught with the pathos excited by any reference to the heady days of political enthusiasm during the war in Vietnam. Lentricchia ominously figures a scene of rueful solitude where de Manian lucidity breaks into the big chill. And maybe it will. But Lentricchia furnishes no good reason why it should. De Manian deconstruction is “deconstructed” by Lentricchia to reveal “against apparent intention, a tacit political agenda.” And this revelation is advertised as a sure embarrassment to the younger practitioners of deconstruction—sweepingly characterized as erstwhile political activists who have, wide-eyed, opted for a critical approach that magically entangles its proponents in the soul-destroying delights of rhetoric and reaction.
Left unexamined in Lentricchia’s story, however, is the basis for the initial rapport between radicalism and deconstruction. Why should collegiate activists have turned into deconstructionists? Is not that, in Lentricchia’s terms, the same question as asking why political activists should have turned to literary criticism at all? If we suppose this original turn to be intentional, how could the initiates of this critical approach ever be genuinely betrayed into embarrassment by time or by its herald, Frank Lentricchia? On the face of it, the traducement of a secret intention would be unlikely to come as a surprise, since deconstructing deconstruction is not only the enterprise of Marxist critics like Lentricchia but also of Jacques Derrida, archdeconstructor, who unashamedly identified the embarrassment of intention as constitutive of the deconstructive method. If deconstruction is at once a natural outlet for activists and the first step on a slippery slope that ends in apostasy, it suggests a phenomenon with contours more suggestively intricate, if not less diabolically seductive, than the program Lentricchia outlines. And it is a phenomenon as worrisomely affiliative as it is bafflingly intricate. We need to know whether the relations between deconstruction and radical politics, between deconstruction and apostasy, between deconstruction and criticism, and between apostasy and criticism are necessary or contingent, or neither and both at once. 1. Frank Lentricchia, Criticism and Social Change, p. 38; all further references to this work, abbreviated CSC, will be included in the text. Jerome Christensen, professor of English at the Johns Hopkins University, is the author of Coleridge’s Blessed Machine of Language and the forthcoming Hume’s Practice: The Career of an Enlightenment Man of Letters. He is currently at work on a study of Byron and the issue of strong romanticism.
The ancient Stoics repeatedly stressed the monolithic comprehensiveness of their philosophy, and this book is the only one to provide a holistic grasp of their attempt to synthesize the whole of the human condition into a unified view. Originally published in 1962, _An Essay on the Unity of Stoic Philosophy_ was far ahead of its time. Now a pivotal text, it lays out the core ideas of Stoicism and their interconnection against the backdrop of Aristotelian philosophy, providing a coherent understanding of the many—and sometimes divergent—philosophies the Stoics formulated. At once penetrating and lucid, Johnny Christensen’s book is brought back into print in a second edition for a new audience.
In this highly original study, Jerome Christensen reconstructs the career of a representative Enlightenment man of letters, David Hume. In doing so, Christensen develops a prototype for a post-structuralist biography. Christensen motivates the interplay between Hume’s texts as arguments and as symbolic acts by conceiving of Hume’s literary career as an adaptive discursive practice, the projected and performed narrative of his social life. Students and scholars of eighteenth-century English and French literature, feminist studies, political theory and history, philosophy, and intellectual history will welcome this unprecedented and challenging view of David Hume and his times.
Revenge has been a subject of concern in most intellectual traditions throughout history, and even when social norms regard it as permissible or even obligatory, it is commonly recognised as being more counterproductive than beneficial. In this book, Kit R. Christensen explores this provocative issue, offering an in-depth account of both the nature of revenge and the causes and consequences of the desire for this kind of retaliatory violence. He then develops a version of eudaimonistic consequentialism to argue that vengeance is never morally justified, and applies this to cases of intergroup violence where the lust for revenge against a vilified 'Them' is easily incited and often exploited. His study will interest a wide range of readers in moral philosophy as well as social philosophers, legal theorists, and social/behavioural scientists.
Sometimes we get evidence of our own epistemic malfunction. This can come from finding out we’re fatigued, or have been drugged, or that other competent and well-informed thinkers disagree with our beliefs. This sort of evidence seems to behave differently from ordinary evidence about the world. In particular, getting such evidence can put agents in a position where the most rational response involves violating some epistemic ideal.
Responding rationally to the information that others disagree with one’s beliefs requires assessing the epistemic credentials of the opposing beliefs. Conciliatory accounts of disagreement flow in part from holding that these assessments must be independent from one’s own initial reasoning on the disputed matter. I argue that this claim, properly understood, does not have the untoward consequences some have worried about. Moreover, some of the difficulties it does engender must be faced by many less conciliatory accounts of disagreement.
What role, if any, does formal logic play in characterizing epistemically rational belief? Traditionally, belief is seen in a binary way: either one believes a proposition, or one doesn't. Given this picture, it is attractive to impose certain deductive constraints on rational belief: that one's beliefs be logically consistent, and that one believe the logical consequences of one's beliefs. A less popular picture sees belief as a graded phenomenon.
How much should your confidence in your beliefs be shaken when you learn that others – perhaps 'epistemic peers' who seem as well-qualified as you are – hold beliefs contrary to yours? This article describes motivations that push different philosophers towards opposite answers to this question. It identifies a key theoretical principle that divides current writers on the epistemology of disagreement. It then examines arguments bearing on that principle, and on the wider issue. It ends by describing some outstanding questions that thinking about this issue raises.
‘There is no place in the phenomenology of fully absorbed coping’, writes Hubert Dreyfus, ‘for mindfulness. In flow, as Sartre sees, there are only attractive and repulsive forces drawing appropriate activity out of an active body’. Among the many ways in which history animates dynamical systems at a range of distinctive timescales, the phenomena of embodied human habit, skilful movement, and absorbed coping are among the most pervasive and mundane, and the most philosophically puzzling. In this essay we examine both habitual and skilled movement, sketching the outlines of a multidimensional framework within which the many differences across distinctive cases and domains might be fruitfully understood. Both the range of movement phenomena which can plausibly be seen as instances of habit or skill, and the space of possible theories of such phenomena, are richer and more disparate than philosophy easily encompasses. We seek to bring phenomenology into contact with relevant movements in psychological theories of skilful action, in the belief that phenomenological philosophy and cognitive science can be allies rather than antagonists.
It has often been noticed that conciliatory views of disagreement are "self-undermining" in a certain way: advocates of such views cannot consistently maintain them when other philosophers disagree. This leads to apparent problems of instability and even inconsistency. Does self-undermining, then, show conciliationism untenable? If so, the untenability would extend not only to almost all views of disagreement, but to a wide range of other views supporting what one might call epistemic modesty: roughly, the idea that getting evidence that one has made an epistemic error in arriving at one’s opinion may require adjusting that opinion. This paper argues that the phenomenon of self-undermining does not disclose any defect in views mandating epistemic modesty. Instead, it highlights an uncomfortable but natural consequence of reflecting on one's own possible epistemic imperfections, a sort of reflection that tends to cause epistemic ideals to conflict.
This paper investigates how deans and directors at the top 50 global MBA programs (as rated by the "Financial Times" in their 2006 Global MBA rankings) respond to questions about the inclusion and coverage of the topics of ethics, corporate social responsibility, and sustainability at their respective institutions. This work purposely investigates each of the three topics separately. Our findings reveal that: (1) a majority of the schools require that one or more of these topics be covered in their MBA curriculum and one-third of the schools require coverage of all three topics as part of the MBA curriculum, (2) there is a trend toward the inclusion of sustainability-related courses, (3) there is a higher percentage of student interest in these topics (as measured by the presence of a Net Impact club) in the top 10 schools, and (4) several schools are teaching these topics using experiential learning and immersion techniques. We note a fivefold increase in the number of stand-alone ethics courses since a 1988 investigation on ethics, and we include other findings about institutional support of centers or special programs, as well as a discussion of integration, teaching techniques, and notable practices in relation to all three topics.
There is a widespread view that well-learned skills are automated, and that attention to the performance of these skills is damaging because it disrupts the automatic processes involved in their execution. This idea serves as the basis for an account of choking in high pressure situations. On this view, choking is the result of self-focused attention induced by anxiety. Recent research in sports psychology has produced a significant body of experimental evidence widely interpreted as supporting this account of choking in certain kinds of complex sensorimotor skills. We argue against this interpretation, pointing to problems with both the empirical evidence and the underlying theory. The experimental research fails to provide direct support for the central claims of the self-focus approach, contains inconsistencies, and suffers from problems of ecological validity. In addition, qualitative studies of choking have yielded contrary results. We further argue that in their current forms the self-focus and rival distraction approaches both lack the theoretical resources to provide a good theory of choking, and we argue for an expanded approach. Some of the elements that should be in an expanded approach include accounts of the features of pressure situations that influence the psychological response, the processes of situation appraisal, and the ways that attentional control can be overwhelmed, leading to distraction in some cases, and in others, perhaps, to damaging attention to skill execution. We also suggest that choking may sometimes involve performance-impairing mechanisms other than distraction or self-focus.
We present a synthetic theory of skilled action which proposes that cognitive processes make an important contribution to almost all skilled action, contrary to influential views that many skills are performed largely automatically. Cognitive control is focused on strategic aspects of performance, and plays a greater role as difficulty increases. We offer an analysis of various forms of skill experience and show that the theory provides a better explanation for the full set of these experiences than automatic theories. We further show that the theory can explain experimental evidence for skill automaticity, including evidence that secondary tasks do not interfere with expert performance, and evidence that experts have reduced memory for performance of sensorimotor skills.
Formally-inclined epistemologists often theorize about ideally rational agents--agents who exemplify rational ideals, such as probabilistic coherence, that human beings could never fully realize. This approach can be defended against the well-known worry that abstracting from human cognitive imperfections deprives the approach of interest. But a different worry arises when we ask what an ideal agent should believe about her own cognitive perfection (even an agent who is in fact cognitively perfect might, it would seem, be uncertain of this fact). Consideration of this question reveals an interesting feature of the structure of our epistemic ideals: for agents with limited information, our epistemic ideals turn out to conflict with one another.
Many people have a strong intuition that there is something morally objectionable about playing violent video games, particularly with increases in the number of people who are playing them and the games' alleged contribution to some highly publicized crimes. In this paper, I use the framework of utilitarian, deontological, and virtue ethical theories to analyze the possibility that there might be some philosophical foundation for these intuitions. I raise the broader question of whether or not participating in authentic simulations of immoral acts in general is wrong. I argue that neither the utilitarian nor the Kantian has substantial objections to violent game playing, although they offer some important insights into playing games in general and what it means, morally, to be a “good sport.” The Aristotelian, however, has a plausible and intuitive way to protest participation in authentic simulations of violent acts in terms of character: engaging in simulated immoral acts erodes one's character and makes it more difficult for one to live a fulfilled eudaimonic life.
Expert skill in music performance involves an apparent paradox. On stage, expert musicians are required accurately to retrieve information that has been encoded over hours of practice. Yet they must also remain open to the demands of the ever-changing situational contingencies with which they are faced during performance. To further explore this apparent paradox and the way in which it is negotiated by expert musicians, this article profiles theories presented by Roger Chaffin, Hubert Dreyfus, and Tony and Helga Noice. For Chaffin, expert skill in music performance relies solely upon overarching mental representations, while, for Dreyfus, such representations are needed only by novices, while experts rely on a more embodied form of coping. Between Chaffin and Dreyfus sit the Noices, who argue that both overarching cognitive structures and embodied processes underlie expert skill. We then present the Applying Intelligence to the Reflexes (AIR) approach, a differently nuanced model of expert skill aligned with the integrative spirit of the Noices’ research. The AIR approach suggests that musicians negotiate the apparent paradox of expert skill via a mindedness that allows flexibility of attention during music performance. We offer data from recent doctoral research conducted by the first author of this article to demonstrate at a practical level the usefulness of the AIR approach when attempting to understand the complexities of expert skill in music performance.
Much work on the sense of agency has focused either on abnormal cases, such as delusions of control, or on simple action tasks in the laboratory. Few studies address the nature of the sense of agency in complex natural settings, or the effect of skill on the sense of agency. Working from two case studies of mountain bike riding, we argue that the sense of agency in high-skill individuals incorporates awareness of multiple causal influences on action outcomes. This allows fine-grained differentiation of the contributions of self and external factors to action outcomes. We further argue that the sense of agency incorporates prospective awareness of actions that are possible in a situation and awareness of the limits of control. These forms of sense of agency enable highly flexible, context-sensitive strategic control, and are likely to contribute to high interindividual variability in responses to complex tasks.
We often get evidence concerning the reliability of our own thinking about some particular matter. This “higher-order evidence” can come from the disagreement of others, or from information about our being subject to the effects of drugs, fatigue, emotional ties, implicit biases, etc. This paper examines some pros and cons of two fairly general models for accommodating higher-order evidence. The one that currently seems most promising also turns out to have the consequence that epistemic akrasia should occur more frequently than is sometimes supposed. But it also helps us see why this might not be a bad thing.
A number of philosophers, from Thomas Reid through C. A. J. Coady, have argued that one is justified in relying on the testimony of others, and furthermore, that this should be taken as a basic epistemic presumption. If such a general presumption were not ultimately dependent on evidence for the reliability of other people, the ground for this presumption would be a priori. Such a presumption would then have a status like that which Roderick Chisholm claims for the epistemic principle that we are justified in believing what our senses tell us.
This paper outlines an original interactivist-constructivist (I-C) approach to modelling intelligence and learning as a dynamical embodied form of adaptiveness and explores some applications of I-C to understanding the way cognitive learning is realized in the brain. Two key ideas for conceptualizing intelligence within this framework are developed. These are: intelligence is centrally concerned with the capacity for coherent, context-sensitive, self-directed management of interaction; and the primary model for cognitive learning is anticipative skill construction. Self-directedness is a capacity for integrative process modulation which allows a system to "steer" itself through its world by anticipatively matching its own viability requirements to interaction with its environment. Because the adaptive interaction processes required of intelligent systems are too complex for effective action to be prespecified, learning is an important component of intelligence. A model of self-directed anticipative learning (SDAL) is formulated based on interactive skill construction, and argued to constitute a central constructivist process involved in cognitive development. SDAL illuminates the capacity of intelligent learners to start with the vague, poorly defined problems typically posed in realistic learning situations and progressively refine them, transforming them into problems with sufficient structure to guide the construction of a solution. Finally, some of the implications of I-C for modelling of the neuronal basis of intelligence and learning are explored; in particular, Quartz and Sejnowski's recent neural constructivism paradigm, enriched by Montague and Sejnowski's dopaminergic model of anticipative-predictive neural learning, is assessed as a promising, but incomplete, contribution to this approach.
The paper concludes with a fourfold reflection on the divergence in cognitive modelling philosophy between the I-C and the traditional computational information processing approaches.
In policy-making the consumption of specially labelled products, and its role in improving the welfare of livestock, has attracted considerable attention. There is in many countries a diverse market for animal welfare-friendly products which is potentially confusing and may lack transparency. We ask whether special quality labels that involve medium levels of animal welfare, as compared with labels promoting premium levels of animal welfare, have a role to play in promoting improvements in animal welfare. The Danish pork market is our reference case, but we also widen the context by comparing the markets for pork in three other European countries. Our findings suggest that in order to improve animal welfare through demand for welfare-friendly products it is important to keep the market for products with strong animal welfare profiles separate from markets for products with medium levels of animal welfare where, often, animal welfare is bundled together with other food quality attributes. We conclude that such quality labels may indeed play an important role in promoting higher animal welfare standards provided that they offer real improvements in animal welfare as compared with standard products. They will be attractive to consumers with a positive, but not especially strong, interest in animal welfare as an individual food attribute who would otherwise be inclined to purchase standard products.
The most immediately appealing model for formal constraints on degrees of belief is provided by probability theory, which tells us, for instance, that the probability of P can never be greater than that of (P v Q). But while this model has much intuitive appeal, many have been concerned to provide arguments showing that ideally rational degrees of belief would conform to the calculus of probabilities. The arguments most frequently used to make this claim plausible are the so-called "Dutch Book" arguments.
It is obvious that we would not want to demand that an agent's beliefs at different times exhibit the same sort of consistency that we demand from an agent's simultaneous beliefs; there's nothing irrational about believing P at one time and not-P at another. Nevertheless, many have thought that some sort of coherence or stability of beliefs over time is an important component of epistemic rationality.
One of Mill’s main arguments for free speech springs from taking disagreement as an epistemically valuable resource for fallible thinkers. Contemporary conciliationist treatments of disagreement spring from the same motivation, but end up seeing the epistemic implications of disagreement quite differently. Conciliationism also encounters complexities when transposed from the two-person toy examples featured in the literature to the public disagreements among groups that give the issue much of its urgency. Group disagreements turn out to be in some ways more powerful defeaters of rational belief, even when opposing groups are comparable in size and epistemic credentials. And conciliationism also shows us why determining the rational response to these disagreements can in certain cases (e.g. politics) be a particularly difficult and nuanced matter.
In recent years, a number of approaches to social cognition research have emerged that highlight the importance of embodied interaction for social cognition (Reddy, How infants know minds, 2008; Gallagher, J Conscious Stud 8:83–108, 2001; Fuchs and De Jaegher, Phenom Cogn Sci 8:465–486, 2009; Hutto, in Seemann (ed.) Joint attention: new developments in psychology, philosophy of mind and social neuroscience, 2012). Proponents of such ‘interactionist’ approaches emphasize the importance of embodied responses that are engaged in online social interaction, and which, according to interactionists, present an alternative to mindreading as a source of social understanding. We agree that it is important to take embodied interaction seriously, but do not agree that this presents a fundamental challenge to mainstream mindreading approaches. Drawing upon an analogy between embodied interaction and the exercise of expert skills, we advocate a hierarchical view which claims that embodied social responses generally operate in close conjunction with higher-level cognitive processes that play a coordinative role, and which are often sensitive to mental states. Thus, investigation of embodied responses should inform rather than conflict with research on mindreading.
Many research ethics guidelines now oblige researchers to offer research participants the results of research in which they participated. This practice is intended to uphold respect for persons and ensure that participants are not treated as mere means to an end. Yet some scholars have begun to question a generalised duty to disclose research results, highlighting the potential harms arising from disclosure and questioning the ethical justification for a duty to disclose, especially with respect to individual results. In support of this view, we argue that current rationales for a duty of disclosure do not form an adequate basis for an ethical imperative. We review policy guidance and scholarly commentary regarding the duty to communicate the results of biomedical, epidemiological and genetic research to research participants and show that there is wide variation in opinion regarding what should be disclosed and under what circumstances. Moreover, we argue that there is fundamental confusion about the notion of “research results,” specifically regarding three core distinctions: between aggregate and individual results, among different types of research, and across different degrees of result veracity. Even where policy guidance and scholarly commentary have been most forceful in support of an ethical imperative to disclose research results, ambiguity regarding what is to be disclosed confounds ethical action.
Lei Zhong (2012, “Counterfactuals, regularity and the autonomy approach,” Analysis 72: 75–85) argues that non-reductive physicalists cannot establish the autonomy of mental causation by adopting a counterfactual theory of causation, since such a theory supports a so-called downward causation argument which rules out mental-to-mental causation. We respond that non-reductive physicalists can consistently resist Zhong's downward causation argument as it equivocates between two familiar notions of a physical realizer.
In attempting to improve ethical decision-making in business organizations, researchers have developed models of ethical decision-making processes. Most of these models do not include a role for law in ethical decision-making, or, if law is mentioned, it is set as a boundary constraint, exogenous to the decision process. However, many decision models in business ethics are based on cognitive moral development theory, in which the law is thought to be the external referent of individuals at the level of cognitive development that most people have achieved. Other theoretical bases of ethical decision models, namely social learning and experientialism, also imply a role for law that is rarely made explicit. Law is thus a more important aspect of ethical decision-process models than the models themselves suggest. This paper derives explicit roles for the law from the cognitive development, experientialist, and social learning theories that are used to build ethical decision-making models for business behavior.
The processes associated with globalization have reinforced and even increased prevailing conditions of inequality among human beings with respect to their political, economic, cultural, and social opportunities. Yet, or perhaps precisely because of this trend, there has been, within political philosophy, an observable tendency to question whether equality should in fact be treated as a central value within a theory of justice. In response, I examine a number of nonegalitarian positions to try to show that the concept of equality cannot be dispensed with in any adequate consideration of justice.
Both Representation Theorem Arguments and Dutch Book Arguments support taking probabilistic coherence as an epistemic norm. Both depend on connecting beliefs to preferences, which are not clearly within the epistemic domain. Moreover, these connections are standardly grounded in questionable definitional/metaphysical claims. The paper argues that these definitional/metaphysical claims are insupportable. It offers a way of reconceiving Representation Theorem Arguments which avoids the untenable premises. It then develops a parallel approach to Dutch Book Arguments, and compares the results. In each case preference defects serve as a diagnostic tool, indicating purely epistemic defects.