I explore two accounts of properties within a dispositional essentialist (or causal powers) framework, the pure powers view and the powerful qualities view. I first attempt to clarify precisely what the pure powers view is, and then raise objections to it. I then present the powerful qualities view and, in order to avoid a common misconception, offer a restatement of it that I shall call the truthmaker view. I end by briefly defending the truthmaker view against objections.
Possible worlds, concrete or abstract as you like, are irrelevant to the truthmakers for modality—or so I shall argue in this paper. First, I present the neo-Humean picture of modality, and explain why those who accept it deny a common sense view of modality. Second, I present what I take to be the most pressing objection to the neo-Humean account, one that, I argue, applies equally well to any theory that grounds modality in possible worlds. Third, I present an alternative, properties-based theory of modality and explore several specific ways to flesh the general proposal out, including my favored version, the powers theory. Fourth, I offer a powers semantics for counterfactuals that each version of the properties-based theory of modality can accept, mutatis mutandis. Together with a definition of possibility and necessity in terms of counterfactuals, the powers semantics of counterfactuals generates a semantics for modality that appeals to causal powers and not possible worlds.
Western society today is less unified by a set of core values than ever before. Undoubtedly, the concept of moral consensus is a difficult one in a liberal, democratic and pluralistic society. But it is imperative to avoid a rigid majoritarianism where sensitive personal values are at stake, as in bioethics. Bioethics has become an influential part of public and professional discussions of health care. It has helped frame issues of moral values and medicine as part of a more general effort to find consensus about some of the most perplexing questions of our time. But why is it thought that a moral consensus is important or that it deserves respect? How does moral consensus acquire legitimacy in a society that includes diverse value systems? How is moral consensus possible and how do small groups help create or distort consensus processes? Written by a medical school professor trained in philosophy, this timely work tackles these questions from philosophical, historical, and social scientific standpoints. It begins by describing the traditional ambivalence about consensus in Western culture as well as the uncertain relationship in modernity between consensus and expertise. After outlining the current bioethical consensus, the book gives philosophical and political analyses of the idea of consensus, then assesses the role of consensus in national ethics commissions and in the ethics committee movement. Moreno constructs an original, naturalistic philosophy of moral consensus, referred to as "bioethical naturalism", and then applies sociology and social psychology to actual consensus processes. The book concludes with an account of bioethics as a consensus-oriented social reform movement. This insightful volume will be essential reading for bioethicists, philosophers, physicians, members of ethics committees, and all those concerned with ethical and social issues in health care.
It is often assumed that, while ordinary actions are events, ‘negative actions’ are absences of events. I claim that a negative action is an ordinary, ‘positive’ event that plays a certain role. I argue that my approach can answer standard objections to the identity of negative actions and positive events.
The paper is devoted to the study of humor as an important pragmatic phenomenon bearing on cognition, and, more specifically, as a cooperative mode of non-bona-fide communication. Several computational models of humor are presented in increasing order of complexity and shown to reveal important cognitive structures in jokes. On the basis of these limited implementations, the concept of a full-fledged computational model for the understanding and generation of humor is introduced and discussed in various aspects. The model draws upon the authors' General Theory of Verbal Humor, with its six knowledge resources informing a joke, and on SMEARR, a sophisticated semantic-network-based computational lexical environment. The relevance of the approach to the interpretation, generation, and cognitive structure of humor is discussed in the broader context of the nature of the cooperative non-bona-fide modes of communication.
Freedom and moral responsibility have one foot in the practical realm of human affairs and the other in the esoteric realm of fundamental metaphysics—or so we believe. This has been denied, especially in the metaphysics-bashing era occupying the first two-thirds or so of the twentieth century, traces of which linger in the present day. But the reasons for this denial seem to us quite implausible. Certainly, the argument for the general bankruptcy of metaphysics has been soundly discredited. Arguments from Strawson and others that our moral practices are too deeply embedded in human life to rest on anything as tenuous as a metaphysical doctrine far from the thoughts of ordinary people would seem to prove too much: we can easily imagine fantastic scenarios far from the thoughts of ordinary people—involving, say, alien manipulation or massive deception—that, if true, would clearly undermine claims to freedom and responsibility. For still other philosophers, the separation of the moral life from (some) metaphysical issues is prescriptive, not descriptive: it is a recommendation that we revise ordinary moral thought by severing its allegedly problematic links to metaphysics. (Some philosophers appear to hover undecided between such a prescriptive project and a Strawsonian descriptive claim.) We suspect that the prospects of retaining the binding force of ordinary moral thought, were such a reconceived moral practice widely embraced, are bleak. A transition to something closer to moral nihilism seems at least as likely. In any case, our interest here is in descriptive metaphysics, not revisionary.

To say as we do that freedom and moral responsibility have a partly metaphysical character is not to suggest that they can be had only if some highly specific version of a particular metaphysical framework is correct.
Instead, we suggest in what follows, it is a broadly neo-Humean metaphysics that is not hospitable to freedom (for reasons distinctive to the metaphysics), while a broadly neo-Aristotelian metaphysics is. But we also think (and it is the main aim of our paper to show) that different versions of the neo-Aristotelian metaphysics lead to rather different metaphysical accounts of free and responsible action. Specifically, we will argue that (1) the most satisfactory account of human freedom within the broadly neo-Aristotelian metaphysics is agent-causal, but that (2) two different versions of the general metaphysics will lead to important differences in the agent-causal account of freedom. Adjust the details of your general metaphysics, and the details of your account of freedom are transformed in significant ways. Action theory cannot properly be pursued in isolation from general metaphysics.
Children and adults from the US and China heard about people who died in two types of narrative contexts – medical and religious – and judged whether their psychological and biological capacities cease or persist after death. Most 5- to 6-year-olds reported that all capacities would cease. In the US, but not in China, there was an increase in persistence judgments at 7–8 years, which decreased thereafter. US children's persistence judgments were influenced by narrative context – occurring more often for religious narratives – and such judgments were made especially for psychological capacities. When participants were simply asked what happens to people following death, in both countries there were age-graded increases in references to burial, religious ritual, and the supernatural.
This article is based on a public lecture hosted by the Monash University Centre for Human Bioethics in Melbourne, Australia on 11 April 2013. The lecture recording was transcribed by Vicky Ryan, and the original transcript has been edited for clarity and brevity by Vicky Ryan, Michael Selgelid and Jonathan Moreno.
The prefrontal cortex has long been suspected to play an important role in cognitive control, in the ability to orchestrate thought and action in accordance with internal goals. Its neural basis, however, has remained a mystery. Here, we propose that cognitive control stems from the active maintenance of patterns of activity in the prefrontal cortex that represent goals and the means to achieve them. They provide bias signals to other brain structures whose net effect is to guide the flow of activity along neural pathways that establish the proper mappings between inputs, internal states, and outputs needed to perform a given task. We review neurophysiological, neurobiological, neuroimaging, and computational studies that support this theory and discuss its implications as well as further issues to be addressed.
The generality problem is perhaps the most notorious problem for process reliabilism. Several recent responses to the generality problem have claimed that the problem has been unfairly leveled against reliabilists. In particular, these responses have claimed that the generality problem is either (i) just as much of a problem for evidentialists, or (ii) if it is not, then a parallel solution is available to reliabilists. Along these lines, Juan Comesaña has recently proposed a solution to the generality problem—well-founded reliabilism. According to Comesaña, the solution to the generality problem lies in solving the basing problem, such that any solution to the basing problem will give a solution to the generality problem. Comesaña utilizes Conee and Feldman's evidentialist account of basing (Conee and Feldman's well-foundedness principle) in forming his version of reliabilism. In this paper I show that Comesaña's proposed solution to the generality problem is inadequate. Well-founded reliabilism both fails to solve the generality problem and subjects reliabilism to new damning verdicts. In addition, I show that evidentialism does not face any parallel problems, so the generality problem remains a reason to prefer evidentialism to reliabilism.
We use concepts of causal powers and their relatives (dispositions, capacities, and abilities) to describe the world around us, both in everyday life and in scientific practice. This volume presents new work on the nature of causal powers, and their connections with other phenomena within metaphysics, philosophy of science, and philosophy of mind.
The notion of essence in psychology is examined from a constructivist viewpoint. The constructivist position is summarized and differentiated from social constructionism, after which constructs are distinguished from concepts in order to position ontology and epistemology as modes of construing. After situating constructivism in relation to philosophical approaches to essences, the distinction between essences and kinds is examined and the presumed constructivist critique of essences in psychology outlined. It is argued that criticizing constructivism as an "anything goes" form of antirealism fails to grasp how constructivist psychology, by emphasizing structure and viability, does indeed place limits on the constructions people may hold. In applying a constructivist understanding of essences in general to those fundamental to human psychology, people can be seen as having three essential psychological qualities: they are closed systems, active meaning-makers, and irreducibly social beings. Yet a constructivist view also maintains that these psychological essences only hold while operating within and committed to a constructivist perspective. In other words, what counts as an essence always depends on one's assumptions, or how one construes events. Finally, a personal construct theory model of essentialist and nonessentialist construing is introduced based on the assumption that everyone construes in both essentialist and nonessentialist ways at different times because doing so is pragmatically viable.
One hypothesis concerning the human dorsal anterior cingulate cortex (ACC) is that it functions, in part, to signal the occurrence of conflicts in information processing, thereby triggering compensatory adjustments in cognitive control. Since this idea was first proposed, a great deal of relevant empirical evidence has accrued. This evidence has largely corroborated the conflict-monitoring hypothesis, and some very recent work has provided striking new support for the theory. At the same time, other findings have posed specific challenges, especially concerning the way the theory addresses the processing of errors. Recent research has also begun to shed light on the larger function of the ACC, suggesting some new possibilities concerning how conflict monitoring might fit into the cingulate's overall role in cognition and action.
Consensus is commonly identified as the goal of ethics committee deliberation, but it is not clear what is morally authoritative about consensus. Various problems with the concept of an ethics committee in a health care institution are identified. The problem of consensus is placed in the context of the debate about realism in moral epistemology, and this is shown to be of interest for ethics committees. But further difficulties, such as the fact that consensus at one level of discourse need not imply consensus at another, oblige us to look more closely at the deliberative process itself. That yields two complementary methods of deliberation that have proven their worth. Finally, placing ethics committees in the context of Dewey's philosophy of social intelligence suggests that consensus should be regarded primarily as a condition rather than as the goal of inquiry. Keywords: ethics committee, consensus, moral authority
Substantial efforts have recently been made to reform the physician-patient relationship, particularly toward replacing the `silent world of doctor and patient' with informed patient participation in medical decision-making. This 'new ethos of patient autonomy' has especially insisted on the routine provision of informed consent for all medical interventions. Although strongly supported by most bioethicists and the law, as well as by more popular writings and expectations, informed consent has, at best, been received in a lukewarm fashion by most clinicians, many simply rejecting what they commonly refer to as the `myth of informed consent'. The purpose of this book is to defuse this seemingly intractable controversy by offering an efficient and effective operational model of informed consent. This goal is pursued first by reviewing and evaluating, in detail, the agendas, arguments, and supporting materials of its proponents and detractors. A comprehensive review of empirical studies of informed consent is provided, as well as a detailed reflection on the common clinician experience with attempts at informed consent and the exercise of autonomy by patients. In the end, informed consent is recast as a management tool for pursuing clinically and ethically important goods and values that any clinician should see as meriting pursuit. Concurrently, the model incorporates a flexible, anticipatory approach that recognizes that no static, generic ritual can legitimately pursue the quite variable goods and values that may be at stake with different patients in different situations. Finally, efficiency of provision is addressed by not pursuing the unattainable and ancillary. Throughout, the traditional principle of beneficence is appealed to in articulating an operational model of informed consent as an intervention that is likely to change outcomes at the bedside for the better.
Recently Trent Dougherty has claimed that there is a tension between skeptical theism and common sense epistemology—that the more plausible one of these views is, the less plausible the other is. In this paper I explain Dougherty’s argument and develop an account of defeaters which removes the alleged tension between skeptical theism and common sense epistemology.
There is consensus that children have questionable decisional capacity and, therefore, in general a parent or a guardian must give permission to enroll a child in a research study. Moreover, freedom from duress and coercion, the cardinal rule in research involving adults, is even more important for children. This principle is embodied prominently in the Nuremberg Code (1947) and in various federal human research protection regulations. In a program named "SATURN" (Student Athletic Testing Using Random Notification), each school in the Oregon public-school system may implement a mandatory drug-testing program for high school student athletes. A prospective study to identify drug use among student-athletes, SATURN is designed both to evaluate the influence of random drug testing and to validate the survey data through identification of individuals who do not report drug use. The enrollment of students in the drug-testing study is a requirement for playing a school sport. In addition to the coercive nature of this study design, there were ethically questionable practices in recruitment, informed consent, and confidentiality. This article concerns the question of whether research can be conducted with high school students in conjunction with a mandatory drug-testing program, while adhering to prevailing ethical standards regarding human-subjects research and specifically the participation of children in research.
Although the neoconservative movement has come to dominate American conservatism, it has its origins in the old Marxist Left. The founders of neoconservatism, Communists in their younger days, inverted Marxist doctrine by arguing that moral values, and not economic forces, were the primary movers of history. Yet the neoconservative critique of biotechnology still borrows heavily from Karl Marx and owes more to the German philosopher Martin Heidegger than to the Scottish philosopher and political economist Adam Smith. Loath to identify these sources - or perhaps unaware of them - neoconservatives do not acknowledge these intellectual underpinnings or their implications. Thus, in the final analysis, their critique is incoherent and even internally inconsistent. By not acknowledging and embracing their intellectual roots, neoconservatives are left with a deeply ambivalent and often confused view of biotechnology and the society that gives rise to it.
We present an original emergent individuals view of human persons, on which persons are substantial biological unities that exemplify metaphysically emergent mental states. We argue that this view allows for a coherent model of identity-preserving resurrection from the dead consistent with orthodox Christian doctrine, one that improves upon alternative accounts recently proposed by a number of authors. Our model is a variant of the "falling elevator" model advanced by Dean Zimmerman that, unlike Zimmerman's, does not require a closest continuer account of personal identity. We end by raising some remaining theological concerns.
Causal powers, say, an electron's power to repel other electrons, are had in virtue of having properties. Electrons repel other electrons because they are negatively charged. One's views about causal powers are shaped by—and shape—one's views concerning properties, causation, laws of nature and modality. It is no surprise, then, that views about the nature of causal powers are generally embedded into larger, more systematic, metaphysical pictures of the world. This dissertation is an exploration of three systematic metaphysics: Neo-Humeanism, Nomicism and Neo-Aristotelianism. I raise problems for the first two and defend the third. A defense of a systematic metaphysics, I take it, involves appealing to pre-theoretical commitments or intuitions, and theoretical issues such as simplicity or explanatory power. While I think that Neo-Aristotelianism is the most intuitive of the available general metaphysical pictures of the world, these kinds of intuitions do not settle the matter. The most widely held of the alternative pictures, Neo-Humeanism, is accepted in great part because of its theoretical power. In contrast, a systematic Neo-Aristotelian metaphysic is, at best, nascent. The way forward for the Neo-Aristotelian, therefore, is a contribution to an ongoing research program, generating Neo-Aristotelian views of modality, causation and laws of nature from the Neo-Aristotelian understanding of causal powers. The central argument of this dissertation is that such views are defensible, and so the Neo-Aristotelian metaphysic ought to be accepted.
Bioethics as a field has been fortunate that its values and concerns have mirrored the values and concerns of society. In light of the September 11th attacks, it is possible that we are witnessing the beginning of a transition in American culture, one fraught with implications for bioethics. The emphasis on autonomy and individual rights may come to be tempered by greater concern over the collective good. Increased emphasis on solidarity over autonomy could greatly alter public response to research abuses aimed at defense from bioterrorism, to privacy of genetic information, and to control of private medical resources to protect the public health.
It is typically assumed that actions are events, but there is a growing consensus that negative actions, like omissions and refrainments, are not events, but absences thereof. If so, then we must either deny the obvious, that we can exercise our agency by omitting and refraining, or give up on event-based theories of agency. I trace the consensus to the assumption that negative action sentences are negative-existentials, and argue that this is false. The best analysis of negative action sentences treats them as quantifying over omissions and refrainments, conceived of as events.
Henry Knowles Beecher, an icon of human research ethics, and Timothy Francis Leary, a guru of the counterculture, are bound together in history by the synthetic hallucinogen lysergic acid diethylamide. Beecher was a U.S. Army Lieutenant Colonel who received five battle stars, was inducted into the Legion of Merit, held the first endowed chair in his discipline, wrote at least three path-breaking papers, and is honored by two prestigious ethics awards in his name. Leary was a West Point dropout who was obliged to leave a research assistant professorship, was convicted of violating the Marihuana Tax Act, was sentenced to 20 years in prison and broke out with the...
Understanding how neurons represent, process, and manipulate information is one of the main goals of neuroscience. These issues are fundamentally abstract, and information theory plays a key role in formalizing and addressing them. However, application of information theory to experimental data is fraught with many challenges. Meeting these challenges has led to a variety of innovative analytical techniques, with complementary domains of applicability, assumptions, and goals.
Justifying his proposal for "health savings accounts," which would allow individuals to set aside tax-free dollars against future healthcare needs, President Bush has said that "Health savings accounts all aim at empowering people to make decisions for themselves." Who could disagree with such a sentiment? Although bioethicists may be among those who express skepticism that personal health savings accounts will be part of the needed "fix" of our healthcare financing system, self-determination has long been part of their mantra. Indeed, the field of bioethics played an important role in advancing this idea in the medical world when physician paternalism was regnant. Has its popularity caused it to become so vapid as to be ripe for misuse?
Nanotechnologies and nanoscience have generated an unprecedented global research and development race involving dozens of countries. The understanding of associated environmental, ethical, and societal implications lags far behind the science and technology. Consequently, it is critical to consider both what is known and what is unknown to offer a kernel to which future work can be added. The challenges presented by nanotechnologies are discussed. Some initial solutions, such as self-regulation and borrowing techniques and tools from other fields, are accompanied by a call for further research.
Is human cognition best described by optimal models, or by adaptive but suboptimal heuristic strategies? It is frequently hard to identify which theoretical model is normatively best justified, especially in the context of information search.
Searching for information is critical in many situations. In medicine, for instance, careful choice of a diagnostic test can help narrow down the range of plausible diseases that the patient might have. In a probabilistic framework, test selection is often modeled by assuming that people's goal is to reduce uncertainty about possible states of the world. In cognitive science, psychology, and medical decision making, Shannon entropy is the most prominent and most widely used model to formalize probabilistic uncertainty and the reduction thereof. However, a variety of alternative entropy metrics are popular in the social and the natural sciences, computer science, and philosophy of science. Particular entropy measures have been predominant in particular research areas, and it is often an open issue whether these divergences emerge from different theoretical and practical goals or are merely due to historical accident. Cutting across disciplinary boundaries, we show that several entropy and entropy reduction measures arise as special cases in a unified formalism, the Sharma-Mittal framework. Using mathematical results, computer simulations, and analyses of published behavioral data, we discuss four key questions: How do various entropy models relate to each other? What insights can be obtained by considering diverse entropy models within a unified framework? What is the psychological plausibility of different entropy models? What new questions and insights for research on human information acquisition follow? Our work provides several new pathways for theoretical and empirical research, reconciling apparently conflicting approaches and empirical findings within a comprehensive and unified information-theoretic formalism.
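The claim that familiar entropy measures arise as special cases of the Sharma-Mittal formalism can be sketched concretely. The short Python function below is a minimal illustration of my own, not the authors' implementation: it computes the two-parameter Sharma-Mittal entropy of a probability distribution, with Shannon, Rényi, and Tsallis entropies recovered as limits or special cases of the order and degree parameters. The function name and the numerical handling of the limit cases are assumptions of this sketch.

```python
import math

def sharma_mittal(p, order, degree, eps=1e-9):
    """Sharma-Mittal entropy (in nats) of a probability vector p.

    The parameters (order r, degree t) select the family member:
      r -> 1, t -> 1 : Shannon entropy
      t -> 1         : Renyi entropy of order r
      r == t         : Tsallis entropy of degree t
    """
    p = [x for x in p if x > 0.0]          # ignore zero-probability outcomes
    shannon = -sum(x * math.log(x) for x in p)
    if abs(order - 1.0) < eps and abs(degree - 1.0) < eps:
        return shannon                      # Shannon limit
    s = sum(x ** order for x in p)          # power sum of the distribution
    if abs(degree - 1.0) < eps:             # Renyi limit (t -> 1)
        return math.log(s) / (1.0 - order)
    if abs(order - 1.0) < eps:              # r -> 1 limit, expressed via Shannon
        return (math.exp((1.0 - degree) * shannon) - 1.0) / (1.0 - degree)
    # General Sharma-Mittal case
    return (s ** ((1.0 - degree) / (1.0 - order)) - 1.0) / (1.0 - degree)

# For a uniform distribution over n outcomes, Shannon and every Renyi
# entropy agree (ln n), while Tsallis entropies differ:
u = [0.25] * 4
print(sharma_mittal(u, 1.0, 1.0))  # Shannon: ln 4
print(sharma_mittal(u, 2.0, 1.0))  # Renyi of order 2: ln 4
print(sharma_mittal(u, 2.0, 2.0))  # Tsallis of degree 2: 0.75
```

On this sketch, "reducing uncertainty" by a diagnostic test amounts to comparing the entropy of the prior distribution with the expected entropy after observing the test result, and the choice of order and degree determines which notion of uncertainty is being reduced.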