Many decisions involve multiple stages of choices and events, and such decisions can be represented graphically as decision trees. Optimal strategies for decision trees are commonly determined by a backward induction analysis that demands adherence to three fundamental consistency principles: dynamic, consequential, and strategic. Previous research (Busemeyer et al. 2000, J. Exp. Psychol. Gen. 129, 530) found that decision-makers violate dynamic and strategic consistency at rates significantly higher than choice inconsistency across various levels of potential reward. The current research extends these findings under new conditions; specifically, it explores the extent to which these principles are violated as a function of the planning-horizon length of the decision tree. Results from two experiments suggest that dynamic inconsistency increases as tree length increases; these results are explained within a dynamic approach-avoidance framework.
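The backward induction analysis mentioned above can be illustrated with a minimal sketch: expected values are computed bottom-up, with decision nodes taking the maximum over their children and chance nodes taking the probability-weighted average. The tree structure, node names, and payoffs below are illustrative assumptions, not stimuli from the study.

```python
# Minimal sketch of backward induction on a decision tree.
# Payoffs and probabilities are hypothetical examples.

def backward_induction(node):
    """Return the expected value of a tree node.
    Decision nodes take the max over children (the optimal plan);
    chance nodes take the probability-weighted average."""
    kind = node["kind"]
    if kind == "payoff":
        return node["value"]
    values = [backward_induction(c) for c in node["children"]]
    if kind == "decision":
        return max(values)
    if kind == "chance":
        return sum(p * v for p, v in zip(node["probs"], values))
    raise ValueError(f"unknown node kind: {kind}")

# A two-stage tree: an initial choice between a safe payoff
# and a gamble resolved by a chance event.
tree = {
    "kind": "decision",
    "children": [
        {"kind": "payoff", "value": 5},           # safe option
        {"kind": "chance", "probs": [0.5, 0.5],   # risky option
         "children": [
             {"kind": "payoff", "value": 12},
             {"kind": "payoff", "value": 0},
         ]},
    ],
}

print(backward_induction(tree))  # 6.0: the gamble's expected value beats the safe 5
```

Dynamic consistency, in these terms, asks whether the plan selected at the root is still the one the decision-maker follows once the chance node has actually resolved.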
Why should modern philosophers read the works of R. G. Collingwood? His ideas are often thought difficult to locate within the main lines of development taken by twentieth-century philosophy. Some have read Collingwood as anticipating the later Wittgenstein; others have concentrated exclusively on the internal coherence of his thought. This work aims to introduce Collingwood to contemporary students of philosophy through direct engagement with his arguments. It is a conversation with Collingwood that takes as its subject matter the topics that interested him (philosophy and method, philosophy of mind, language and logic, the historical imagination, art and expression, action, metaphysics and life) and which still preoccupy us today. It is the first introductory book on this major modern philosopher; it includes critical investigation of his thought; and there is no similar work available.
While psychological egoism “A”, the theory that all human actions are selfish, is easily defeated, an alternative formulation, “B”, is defended: “All deliberate human actions are either self-interested or self-referential.” While “B” is not empirically testable, neither is any alternative altruistic theory. “B” escapes the criticisms leveled at “A”, including those of Joseph Butler. “B” is shown to be theoretically superior to any theory of altruism, since it brings coherence to moral theory by explaining the nature of moral motivation.
The author, head of a teaching hospital surgical unit, argues that the medical curriculum must ensure that all students are exposed to a minimum of ethical discussion and decision-making. In describing his own approach he emphasises the need to show students that it is 'an intensely practical subject'. Moreover, he reminds them that moral dilemmas in medicine--perhaps a better term than medical ethics--are unavoidable in clinical practice. Professor Johnson emphasises the need for small group teaching and discussion of real cases, preferably chosen and 'worked up' by individual students. He suggests that ethical issues could profitably be introduced into written, oral and clinical examinations.
We demonstrate that Statistical significance (Chow 1996) includes straw man arguments against (1) effect size, (2) meta-analysis, and (3) Bayesianism. We agree with the author that in experimental designs, H0 “is the effect of chance influences on the data-collection procedure . . . it says nothing about the substantive hypothesis or its logical complement” (Chow 1996, p. 41).
After reviewing portions of the 21st Century Nanotechnology Research and Development Act that call for examination of societal and ethical issues, this essay seeks to understand how nanoethics can play a role in nanotechnology development. What can and should nanoethics aim to achieve? The focus of the essay is on the challenges of examining ethical issues with regard to a technology that is still emerging, still ‘in the making.’ The literature of science and technology studies (STS) is used to understand the nanotechnology endeavor in a way that makes room for influence by nanoethics. The analysis emphasizes: the contingency of technology and the many actors involved in its development; a conception of technology as sociotechnical systems; and the values infused (in a variety of ways) in technology. Nanoethicists can be among the many actors who shape the meaning and materiality of an emerging technology. Nevertheless, there are dangers that nanoethicists should try to avoid. The possibility of being co-opted while working alongside nanotechnology engineers and scientists is one danger that is inseparable from trying to influence. Related but somewhat different is the danger of not asking about the worthiness of the nanotechnology enterprise as a social investment in the future.
In this paper, we focus attention on the role of computer system complexity in ascribing responsibility. We begin by introducing the notion of technological moral action (TMA). TMA is carried out by the combination of a computer system user, a system designer (developers, programmers, and testers), and a computer system (hardware and software). We discuss three sometimes overlapping types of responsibility: causal responsibility, moral responsibility, and role responsibility. Our analysis is informed by the well-known accounts provided by Hart, and by Hart and Honoré. While these accounts are helpful, they have misled philosophers and others by presupposing that responsibility can be ascribed in all cases of action simply by paying attention to the free and intended actions of human beings. Such accounts neglect the part played by technology in ascriptions of responsibility in cases of moral action with technology. For both moral and role responsibility, we argue that ascriptions of both causal and role responsibility depend on seeing action as complex in the sense described by TMA. We conclude by showing how our analysis enriches moral discourse about responsibility for TMA.
Floridi and Sanders' seminal work, “On the morality of artificial agents,” has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as “artificial agents.” Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. We also argue that adopting certain levels of abstraction out of context can be dangerous when the level of abstraction obscures the humans who constitute computer systems. We arrive at this critique of Floridi and Sanders by examining the debate over the moral status of computer systems using the notion of interpretive flexibility. We frame the debate as a struggle over the meaning and significance of computer systems that behave independently, and not as a debate about the ‘true’ status of autonomous systems. Our analysis leads to the conclusion that while levels of abstraction are useful for particular purposes, when it comes to agency and responsibility, computer systems should be conceptualized and identified in ways that keep them tethered to the humans who create and deploy them.
After discussing the distinction between artifacts and natural entities, and the distinction between artifacts and technology, the conditions of the traditional account of moral agency are identified. While computer system behavior meets four of the five conditions, it does not and cannot meet a key condition. Computer systems do not have mental states, and even if they could be construed as having mental states, they do not have intendings to act, which arise from an agent’s freedom. On the other hand, computer systems have intentionality, and because of this, they should not be dismissed from the realm of morality in the same way that natural objects are dismissed. Natural objects behave from necessity; computer systems and other artifacts behave from necessity after they are created and deployed, but, unlike natural objects, they are intentionally created and deployed. Failure to recognize the intentionality of computer systems and their connection to human intentionality and action hides the moral character of computer systems. Computer systems are components in human moral action. When humans act with artifacts, their actions are constituted by the intentionality and efficacy of the artifact which, in turn, has been constituted by the intentionality and efficacy of the artifact designer. All three components – artifact designer, artifact, and artifact user – are at work when there is an action and all three should be the focus of moral evaluation.
In this paper I use the concept of forbidden knowledge to explore questions about putting limits on science. Science has generally been understood to seek and produce objective truth, and this understanding of science has grounded its claim to freedom of inquiry. What happens to decision making about science when this claim to objective, disinterested truth is rejected? There are two changes that must be made to update the idea of forbidden knowledge for modern science. The first is to shift from presuming that decisions to constrain or even forbid knowledge can be made from a position of omniscience (perfect knowledge) to recognizing that such decisions made by human beings are made from a position of limited or partial knowledge. The second is to reject the idea that knowledge is objective and disinterested and accept that knowledge (even scientific knowledge) is interested. In particular, choices about what knowledge gets created are normative, value choices. When these two changes are made to the idea of forbidden knowledge, questions about limiting or forbidding lines of inquiry are shown to distract attention from the more important matters of who makes decisions, and how decisions are made, about what knowledge is produced. Much more attention should be focused on choosing directions in science, and as this is done, the matter of whether constraints should be placed on science will fall into place.
This experiment examined the effects of three elements comprising Jones' (1991) moral intensity construct (social consensus, personal proximity, and magnitude of consequences) in a cross-cultural comparison of ethical decision making within a human resource management (HRM) context. Results indicated social consensus had the most potent effect on judgments of moral concern and judgments of immorality. An analysis of American, Eastern European, and Indonesian responses also indicated socio-cultural differences were moderated by the type of HRM ethical issue. In addition, individual differences in personal ethical ideology (relativism and idealism) varied reliably with moral judgments after controlling for issue characteristics and socio-cultural background.
God alone is the true agreement of concept [Begriff ] and reality [Realität ]; all finite [endlichen] things involve some untruth [Unwahrheit], they have a concept and an existence [Existenz] which are incommensurable [unangemessen]. For this reason they inevitably go to ruin [zugrunde gehen], that the incommensurability [Unangemessenheit] of their concept and their existence may be evident [manifestiert]. The animal, as an individual, has its concept in the species [Gattung]; and its death [Tod] sets the species free from individuality [Einzelnheit]. [§ 24, note 2].
As a new field, cognitivism began with the total rejection of the old, traditional views of language acquisition and of learning ─ individual and collective alike. Chomsky was one of the pioneers in this respect, yet he clouds issues by excessive claims for his originality and by not allowing the beginner in the art of the acquisition of language the use of learning by making hypotheses and testing them, though he acknowledges that researchers, himself included, do use this method. The most important novelty of Chomsky's work is his idealization of the field by postulating the existence of the ideal speaker-hearer, and his suggestion that the hidden structure of sentences is revealed by studying together all sentences that are logically equivalent to each other. This is progress, but his tests of equivalence are insufficient, as they all remain within classical logic. This limitation rests on the greatest shortcoming of Chomsky's view, his idea that every sentence has one subject or subject-part, contrary to the claim of Frege and Russell that assertions involving relations (with two-place predicates) are structurally different from those involving properties (with one-place predicates). (See the Appendix below.)
The practice of clinical medicine is inextricably linked with the need for moral values and ethical principles. The study of medical ethics is, therefore, rightly assuming an increasingly significant place in undergraduate and postgraduate medical courses and in allied health curricula. Making Sense of Medical Ethics offers a no-nonsense introduction to the principles of medical ethics, as applied to the everyday care of patients, the development of novel therapies and the undertaking of pioneering basic medical research. Written from a practical rather than a philosophical perspective, the authors call upon their extensive experience of clinical practice, research and teaching to illustrate how ethical principles can be applied in different "real-life" situations. Making Sense of Medical Ethics encourages readers to understand the principles of medical ethics as they apply to clinical practice; explore and evaluate common misconceptions; consider the ethics underlying any medical decision; and as a result, to realize that a good appreciation of medical ethics will help them to practice more effectively in the future.