In a famous experiment by Tversky and Kahneman (Psychol Rev 90:293–315, 1983), featuring Linda the bank teller, the participants assign a higher probability to a conjunction of propositions than to one of the conjuncts, thereby seemingly committing a probabilistic fallacy. In this paper, we discuss a slightly different example featuring someone named Walter, who also happens to work at a bank, and argue that, in this example, it is rational to assign a higher probability to the conjunction of suitably chosen propositions than to one of the conjuncts. By pointing out the similarities between Tversky and Kahneman’s experiment and our example, we argue that the participants in the experiment may assign probabilities to the propositions in question in such a way that it is also rational for them to give the conjunction a higher probability than one of the conjuncts.
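The probabilistic rule at issue here, that a conjunction can never be more probable than either of its conjuncts, follows directly from the standard axioms and can be checked mechanically. A minimal sketch, using purely illustrative numbers rather than anything drawn from the experiment:

```python
# Conjunction rule: P(A and B) <= min(P(A), P(B)) for any joint distribution.
# The numbers below are illustrative only.

# A toy joint distribution over two binary propositions A and B.
joint = {
    (True, True): 0.10,
    (True, False): 0.25,
    (False, True): 0.30,
    (False, False): 0.35,
}
assert abs(sum(joint.values()) - 1.0) < 1e-9  # a valid distribution

p_a = sum(p for (a, b), p in joint.items() if a)   # marginal P(A) = 0.35
p_b = sum(p for (a, b), p in joint.items() if b)   # marginal P(B) = 0.40
p_ab = joint[(True, True)]                         # P(A and B) = 0.10

# The conjunction never exceeds either conjunct.
assert p_ab <= min(p_a, p_b)
```

The inequality holds because the event A-and-B is a subset of each conjunct's event, so it can never carry more probability mass than either conjunct does on its own.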
Forty years of experimentation on class inclusion and its probabilistic relatives have led to inconsistent results and conclusions about human reasoning. Recent research on the conjunction "fallacy" recapitulates this history. In contrast to previous results, we found that a majority of participants adhere to class inclusion in the classic Linda problem. We outline a theoretical framework that attributes the contradictory results to differences in statistical sophistication and to differences in response mode (whether participants are asked for probability estimates or for ranks), and propose two precise cognitive algorithms for ranking probabilities. Our framework allows us to make novel predictions about when and why people adhere to class inclusion. Evidence obtained in several studies supports these predictions and demonstrates that the proposed ranking algorithms can account for about three-quarters of participants' inferences in the Linda problem.
I argue that Linda Zagzebski's proposed solution to the Meno Problem faces serious challenges. The Meno Problem, roughly, is how to explain the value that knowledge, as such, has over mere true belief. Her proposed solution is that believings—when thought of more like actions—can have value in virtue of their motivations. This meshes nicely with her theory that knowledge is, essentially, virtuously motivated true belief. Her solution fails because it entails that, necessarily, all knowledge is motivated in a way that resembles the motivation of actions. Crucially, Zagzebski says the value derived from motivation comes from certain laudable feelings—like love of truth (she is explicit that love is a feeling). But there are possible cases of knowledge—probably some of which are actual—in which subjects do not or cannot experience these feelings.
The article argues that theorists who try to justify 'ought'-claims, i.e., who try to show that a standard of behavior has normative authority, will run into a regress problem. The problem is similar in structure to the familiar regress in the justification of belief. The point of the paper is not skeptical. Rather, the aim is to help theorists better understand the challenges associated with formulating a theory of normative authority.
This paper argues that reliabilism can handle Gettier cases once it restricts knowledge-producing reliable processes to those that involve a suitable causal link between the subject’s belief and the fact it references. Causal tracking reliabilism (as this version of reliabilism is called) also avoids the problems that refuted the causal theory of knowledge, along with problems besetting more contemporary theories (such as virtue reliabilism and the “safety” account of knowledge). Finally, causal tracking reliabilism allows for a response to Linda Zagzebski’s challenge that no theory of knowledge can both eliminate the possibility of Gettier cases and allow fully warranted but false beliefs.
CAN VIRTUE EPISTEMOLOGY SOLVE THE GETTIER PROBLEM? The aim of this paper is to investigate whether the ideas developed by philosophers representing the current called Virtue Epistemology are able to resolve the Gettier problem. First of all, I recall what the classical concept of knowledge as justified true belief consists in; I then present the so-called Gettier cases that are counterexamples to the classical idea of knowledge. Next, I investigate how the idea of evaluating beliefs formulated by Ernest Sosa is able to deal with the hard cases posed by Gettier, but also by Chisholm and Goldman. I argue that Sosa’s conception could be viewed as a satisfactory analysis of knowledge, if we slightly modify it to accommodate Goldman’s case.
This was published in Cultural Critique (Winter 1991-92), pp. 5-32; revised and reprinted in Who Can Speak? Authority and Critical Identity edited by Judith Roof and Robyn Wiegman, University of Illinois Press, 1996; and in Feminist Nightmares: Women at Odds edited by Susan Weisser and Jennifer Fleischner, (New York: New York University Press, 1994); and also in Racism and Sexism: Differences and Connections eds. David Blumenfeld and Linda Bell, Rowman and Littlefield, 1995.
This paper is part of a symposium on Linda Zagzebski's EPISTEMIC AUTHORITY (OUP, 2012). It focuses on Zagzebski's argument that the transmission of information through a chain of testimony weakens its evidential value. This argument is shown to rest on an overly simplistic model of testimonial transmission that does not apply to religious traditions. The real problem with modeling religious traditions just as transmitters of information is that this assumes a conception of religious knowledge that is too "insular" with respect to other things the believer knows, as well as aspects of religious faith that go beyond the mere acceptance of doctrines.
Consider the following true stories: 1. Anne Cameron, a very gifted white Canadian author, writes several first person accounts of the lives of Native Canadian women. At the 1988 International Feminist Book Fair in Montreal, a group of Native Canadian writers ask Cameron to, in their words, "move over" on the grounds that her writings are disempowering for Native authors. She agrees. 2. After the 1989 elections in Panama are overturned by Manuel Noriega, U.S. President George Bush declares in a public address that Noriega's actions constitute an "outrageous fraud" and that "the voice of the Panamanian people have spoken." "The Panamanian people," he tells us, "want democracy and not tyranny, and want Noriega out." He proceeds to plan the invasion of Panama. 3. At a recent symposium at my university, a prestigious theorist was invited to give a lecture on the political problems of post-modernism. Those of us in the audience, including many white women and people of oppressed nationalities and races, wait in eager anticipation for what he has to contribute to this important discussion. To our disappointment, he introduces his lecture by explaining that he cannot cover the assigned topic, because as a white male he does not feel that he can speak for the feminist and post-colonial perspectives which have launched the critical interrogation of postmodernism's politics. He lectures instead on architecture. These examples demonstrate the range of current practices of speaking for others in our society. While the prerogative of speaking for others remains unquestioned in the citadels of colonial administration, among activists and in the academy it elicits a growing unease and, in some communities of discourse, it is being rejected. There is a strong, albeit contested, current within feminism which holds that speaking for others---even for other women---is arrogant, vain, unethical, and politically illegitimate.
In Being and Nothingness, Jean-Paul Sartre affirms a circle of relations between oneself and another. This circle moves between the relations of love and desire and results from the fact that both love and desire are attempts to capture the other who always remains out of reach. Sartre denies that there can be a dialectic of such relations with others: never can there be a motivated movement beyond the frustrations and failures of each of these attempts to relate to the other. The only way out of this circle is, therefore, according to Sartre, a radical conversion. Like the master in Hegel's Phenomenology of Mind, each individual caught in this circle wants what cannot be attained: the assimilation or the negation of the freedom of the other. He is thus, like Hegel's master, impervious to any reasons that could count against what he is seeking; his failures cannot in any way motivate him to want what can be. From the point of view of such desires, any negative evaluation of these desires must seem arbitrary. Therefore, to the extent that Sartre's earlier writings indicate no other possibilities of human existence except those premised on such impossible demands, Sartre's negative evaluations concerning the bad faith of these individuals must seem arbitrary. My conclusion is not, however, simply negative since I argue that in Saint Genet Sartre presents Genet's life as a dialectical movement beyond failure to triumph. This is not a dialectic of bad faith. Rather it is a dialectic based on a very different desire from the desire for what cannot be. If Sartre thus develops another level, another fundamental desire, from which the level of bad faith can be judged to be wrong, then at least from this level the judgment is not a merely arbitrary one.
Widely regarded as one of the foremost figures in contemporary philosophy of religion, this book by Linda Zagzebski is a major contribution to ethical theory and theological ethics. At the core of the book lies a form of virtue theory based on the emotions. Quite distinct from deontological, consequentialist and teleological virtue theories, this one has a particular theological, indeed Christian, foundation. The theory helps to resolve philosophical problems and puzzles of various kinds: the dispute between cognitivism and non-cognitivism in moral psychology, the claims and counterclaims of realism and anti-realism in the metaphysics of value, and paradoxes of perfect goodness in natural theology, including the problem of evil. As with Zagzebski's previous Cambridge book Virtues of the Mind, this book will be sought out eagerly by a broad swathe of professionals and graduate students in philosophy and religious studies.
Individual differences on a variety of framing and conjunction problems were examined in light of Slovic and Tversky's (1974) understanding/acceptance principle: that more reflective and skilled reasoners are more likely to affirm the axioms that define normative reasoning and to endorse the task construals of informed experts. The predictions derived from the principle were confirmed for the much-discussed framing effect in the Disease Problem and for the conjunction fallacy on the Linda problem. Subjects of higher cognitive ability were disproportionately likely to avoid each fallacy. Other framing problems produced much more modest levels of empirical support. It is conjectured that the varying patterns of individual differences are best explained by two-process theories of reasoning (e.g. Evans, 1984, 1996; Sloman, 1996) conjoined with the assumption that the two processes differentially reflect interactional and analytic intelligence.
When is coherence an indicator of truth? Can sets of information always be ordered according to their coherence? What role does coherence play in theory choice in science? Under what circumstances can a scientific theory be confirmed with only partially reliable measuring instruments? Is the variety-of-evidence thesis true? Why do the concurring statements of independent witnesses carry so much weight? These are some of the questions that this book addresses in a probabilistic context and on the basis of concrete models. In addition, the book offers an elementary introduction to the theory of Bayesian networks and points out connections to debates in cognitive science (Tversky and Kahneman's Linda problem), social choice theory (the Condorcet jury theorem), and computer science (the representation of belief systems). The book is aimed at anyone interested in the application of probabilistic methods in philosophy.
The “demarcation problem,” the issue of how to separate science from pseudoscience, has been around since fall 1919—at least according to Karl Popper’s (1957) recollection of when he first started thinking about it. In Popper’s mind, the demarcation problem was intimately linked with one of the most vexing issues in philosophy of science, David Hume’s problem of induction (Vickers 2010) and, in particular, Hume’s contention that induction cannot be logically justified by appealing to the fact that “it works,” as that in itself is an inductive argument, thereby potentially plunging the philosopher straight into the abyss of a viciously circular argument.
Ever since Socrates, philosophers have been in the business of asking questions of the type “What is X?” The point has not always been to actually find out what X is, but rather to explore how we think about X, to bring up to the surface wrong ways of thinking about it, and hopefully in the process to achieve an increasingly better understanding of the matter at hand. In the early part of the twentieth century one of the most ambitious philosophers of science, Karl Popper, asked that very question in the specific case in which X = science. Popper termed this the “demarcation problem,” the quest for what distinguishes science from nonscience and pseudoscience (and, presumably, also the latter two from each other).
One of the reasons why most of us feel puzzled about the problem of abortion is that we want, and do not want, to allow to the unborn child the rights that belong to adults and children. When we think of a baby about to be born it seems absurd to think that the next few minutes or even hours could make so radical a difference to its status; yet as we go back in the life of the fetus we are more and more reluctant to say that this is a human being and must be treated as such. No doubt this is the deepest source of our dilemma, but it is not the only one. For we are also confused about the general question of what we may and may not do where the interests of human beings conflict. We have strong intuitions about certain cases; saying, for instance, that it is all right to raise the level of education in our country, though statistics allow us to predict that a rise in the suicide rate will follow, while it is not all right to kill the feeble-minded to aid cancer research. It is not easy, however, to see the principles involved, and one way of throwing light on the abortion issue will be by setting up parallels involving adults or children once born. So we will be able to isolate the “equal rights” issue and should be able to make some advance...
J.L. Mackie’s version of the logical problem of evil is a failure, as even he came to recognize. Contrary to current mythology, however, its failure was not established by Alvin Plantinga’s Free Will Defense. That’s because a defense is successful only if it is not reasonable to refrain from believing any of the claims that constitute it, but it is reasonable to refrain from believing the central claim of Plantinga’s Free Will Defense, namely the claim that, possibly, every essence suffers from transworld depravity.
My primary aim is to defend a nonreductive solution to the problem of action. I argue that when you are performing an overt bodily action, you are playing an irreducible causal role in bringing about, sustaining, and controlling the movements of your body, a causal role best understood as an instance of agent causation. Thus, the solution that I defend employs a notion of agent causation, though emphatically not in defence of an account of free will, as most theories of agent causation are. Rather, I argue that the notion of agent causation introduced here best explains how it is that you are making your body move during an action, thereby providing a satisfactory solution to the problem of action.
I resolve the major challenge to an Expressivist theory of the meaning of normative discourse: the Frege–Geach Problem. Drawing on considerations from the semantics of directive language (e.g., imperatives), I argue that, although certain forms of Expressivism (like Gibbard’s) do run into at least one version of the Problem, it is reasonably clear that there is a version of Expressivism that does not.
The philosophical study of consciousness is chock full of thought experiments: John Searle’s Chinese Room, David Chalmers’ Philosophical Zombies, Frank Jackson’s Mary’s Room, and Thomas Nagel’s ‘What is it like to be a bat?’ among others. Many of these experiments and the endless discussions that follow them are predicated on what Chalmers famously referred to as the ‘hard’ problem of consciousness: for him, it is ‘easy’ to figure out how the brain is capable of perception, information integration, attention, reporting on mental states, etc., even though this is far from being accomplished at the moment. What is ‘hard’, claims the man of the p-zombies, is to account for phenomenal experience, or what philosophers usually call ‘qualia’: the ‘what is it like’, first-person quality of consciousness.
A popular form of virtue epistemology—defended by such figures as Ernest Sosa, Linda Zagzebski and John Greco—holds that knowledge can be exclusively understood in virtue-theoretic terms. In particular, it holds that there isn't any need for an additional epistemic condition to deal with the problem posed by knowledge-undermining epistemic luck. It is argued that the sustainability of such a proposal is called into question by the possibility of epistemic twin earth cases. In particular, it is argued that such cases demonstrate the need for virtue-theoretic accounts of knowledge to appeal to an independent epistemic condition which excludes knowledge-undermining epistemic luck.
Here I discuss some theistic responses to the problem of animal pain and suffering with special attention to Michael Murray’s presentation in Nature Red in Tooth and Claw. The neo-Cartesian defenses he describes are reviewed, along with the appeal to nomic regularity and Murray’s emphasis on the progression of the universe from chaos to order. It is argued that, despite these efforts to prove otherwise, the problem of animal suffering remains a serious threat to the belief that an all-powerful, all-knowing, and all-good creator exists.
In this paper, I argue that there is a kind of evil, namely, the unequal distribution of natural endowments, or natural inequality, which presents theists with a new evidential problem of evil. The problem of natural inequality is a new evidential problem of evil not only because, to the best of my knowledge, it has not yet been discussed in the literature, but also because available theodicies, such as the free will defense and the soul-making defense, are not adequate responses in the face of this particular evil, or so I argue.
Philosophers and cognitive scientists have worried that research on animal mind-reading faces a ‘logical problem’: the difficulty of experimentally determining whether animals represent mental states (e.g. seeing) or merely the observable evidence (e.g. line-of-gaze) for those mental states. The most impressive attempt to confront this problem has been mounted recently by Robert Lurz. However, Lurz' approach faces its own logical problem, revealing this challenge to be a special case of the more general problem of distal content. Moreover, participants in this debate do not agree on criteria for representation. As such, future debate should either abandon the representational idiom or confront underlying semantic disagreements.
Many current popular views in epistemology require a belief to be the result of a reliable process (aka ‘method of belief formation’ or ‘cognitive capacity’) in order to count as knowledge. This means that the generality problem rears its head, i.e. the kind of process in question has to be spelt out, and this looks difficult to do without being either over- or under-general. In response to this problem, I propose that we should adopt a more fine-grained account of the epistemic basing relation, at which point the generality problem becomes easy to solve.
Moral non-cognitivists hope to explain the nature of moral agreement and disagreement as agreement and disagreement in non-cognitive attitudes. In doing so, they take on the task of identifying the relevant attitudes, distinguishing the non-cognitive attitudes corresponding to judgements of moral wrongness, for example, from attitudes involved in aesthetic disapproval or the sports fan’s disapproval of her team’s performance. We begin this paper by showing that there is a simple recipe for generating apparent counterexamples to any informative specification of the moral attitudes. This may appear to be a lethal objection to non-cognitivism, but a similar recipe challenges attempts by non-cognitivism’s competitors to specify the conditions underwriting the contrast between genuine and merely apparent moral disagreement. Because of its generality, this specification problem requires a systematic response, which, we argue, is most easily available for the non-cognitivist. Building on premisses congenial to the non-cognitivist tradition, we make the following claims: (1) In paradigmatic cases, wrongness-judgements constitute a certain complex but functionally unified state, and paradigmatic wrongness-judgements form a functional kind, preserved by homeostatic mechanisms. (2) Because of the practical function of such judgements, we should expect judges’ intuitive understanding of agreement and disagreement to be accommodating, treating states departing from the paradigm in various ways as wrongness-judgements. (3) This explains the intuitive judgements required by the counterexample-generating recipe, and more generally why various kinds of amoralists are seen as making genuine wrongness-judgements.
The reference class problem arises when we want to assign a probability to a proposition (or sentence, or event) X, which may be classified in various ways, yet its probability can change depending on how it is classified. The problem is usually regarded as one specifically for the frequentist interpretation of probability and is often considered fatal to it. I argue that versions of the classical, logical, propensity and subjectivist interpretations also fall prey to their own variants of the reference class problem. Other versions of these interpretations apparently evade the problem. But I contend that they are all “no-theory” theories of probability: accounts that leave quite obscure why probability should function as a guide to life, a suitable basis for rational inference and action. The reference class problem besets those theories that are genuinely informative and that plausibly constrain our inductive reasonings and decisions. I distinguish a “metaphysical” and an “epistemological” reference class problem. I submit that we can dissolve the former problem by recognizing that probability is fundamentally a two-place notion: conditional probability is the proper primitive of probability theory. However, I concede that the epistemological problem remains.
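The difficulty the abstract describes can be made concrete with toy frequencies (all counts below are hypothetical): one and the same individual receives a different probability for the same outcome depending on which reference class supplies the relative frequency.

```python
# Frequentist reference class problem: the probability assigned to
# "X survives to age 80" varies with the class X is referred to.
# All counts are hypothetical, for illustration only.
classes = {
    "all adults":            {"members": 10000, "survivors": 7000},
    "adult smokers":         {"members": 2000,  "survivors": 900},
    "adult smoking joggers": {"members": 500,   "survivors": 300},
}

# X belongs to all three classes, yet each class yields a different
# relative frequency, hence a different "probability" for the same event.
freqs = {name: c["survivors"] / c["members"] for name, c in classes.items()}
for name, f in sorted(freqs.items(), key=lambda kv: -kv[1]):
    print(f"P(survival | {name}) = {f:.2f}")
```

Nothing in the frequency data itself privileges one of these classes, which is why the frequentist needs a further principle (and why, on the paper's argument, other interpretations face analogous choices).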
Barnett and Block (J Bus Ethics 18(2):179–194, 2011) argue that one cannot distinguish between deposits and loans due to the continuum problem of maturities and because future goods do not exist—both essential characteristics that distinguish deposit from loan contracts. In a similar way, but leading to opposite conclusions, Cachanosky (forthcoming) maintains that both maturity mismatching and fractional reserve banking are ethically justified as these contracts are equivalent. We argue herein that the economic and legal differences between genuine deposit and loan contracts are clear. This implies different legal obligations for these contracts, a necessary step in assessing the ethics of both fractional reserve banking and maturity mismatching. While the former is economically, legally, and perhaps most importantly ethically problematic, there are no such troubles with the latter.
As anyone who has flown out of a cloud knows, the boundaries of a cloud are a lot less sharp up close than they can appear on the ground. Even when it seems clearly true that there is one, sharply bounded, cloud up there, really there are thousands of water droplets that are neither determinately part of the cloud, nor determinately outside it. Consider any object that consists of the core of the cloud, plus an arbitrary selection of these droplets. It will look like a cloud, and circumstances permitting rain like a cloud, and generally has as good a claim to be a cloud as any other object in that part of the sky. But we cannot say every such object is a cloud, else there would be millions of clouds where it seemed like there was one. And what holds for clouds holds for anything whose boundaries look less clear the closer you look at it. And that includes just about every kind of object we normally think about, including humans. Although this seems to be a merely technical puzzle, even a triviality, a surprising range of proposed solutions has emerged, many of them mutually inconsistent. It is not even settled whether a solution should come from metaphysics, or from philosophy of language, or from logic. Here we survey the options, and provide several links to the many topics related to the Problem.
I extend my direct virtue epistemology to explain how a knowledge-first framework can account for two kinds of positive epistemic standing, one tracked by externalists, who claim that the virtuous duplicate lacks justification, the other tracked by internalists, who claim that the virtuous duplicate has justification, and moreover that such justification is not enjoyed by the vicious duplicate. It also explains what these kinds of epistemic standing have to do with each other. I argue that all justified beliefs are good candidates for knowledge, and are such because they are exercises of competences to know. However, there are two importantly different senses in which a belief may be a good candidate for knowledge, one corresponding to an externalist kind of justification and the other corresponding to an internalist one. I show how the account solves the new evil demon problem in a more satisfactory way than existing accounts. We end up with a view of knowledge, justification, and rationality that is plausible, motivated, and theoretically unified.
Explaining the mind by building machines with minds runs into the other-minds problem: How can we tell whether any body other than our own has a mind when the only way to know is by being the other body? In practice we all use some form of Turing Test: If it can do everything a body with a mind can do such that we can't tell them apart, we have no basis for doubting it has a mind. But what is "everything" a body with a mind can do? Turing's original "pen-pal" version (the TT) only tested linguistic capacity, but Searle has shown that a mindless symbol-manipulator could pass the TT undetected. The Total Turing Test (TTT) calls for all of our linguistic and robotic capacities; immune to Searle's argument, it suggests how to ground a symbol manipulating system in the capacity to pick out the objects its symbols refer to. No Turing Test, however, can guarantee that a body has a mind. Worse, nothing in the explanation of its successful performance requires a model to have a mind at all. Minds are hence very different from the unobservables of physics (e.g., superstrings); and Turing Testing, though essential for machine-modeling the mind, can really only yield an explanation of the body.
This paper explores the relationship between scepticism and epistemic relativism in the context of recent history and philosophy of science. More specifically, it seeks to show that significant treatments of epistemic relativism by influential figures in the history and philosophy of science draw upon the Pyrrhonian problem of the criterion. The paper begins with a presentation of the problem of the criterion as it occurs in the work of Sextus Empiricus. It is then shown that significant treatments of epistemic relativism in recent history and philosophy of science (critical rationalism, historical philosophy of science and the strong programme) draw upon the problem of the criterion. It is briefly suggested that a particularist response to the problem of the criterion may be put to good use against epistemic relativism.
Expressivists, such as Blackburn, analyse sentences such as 'S thinks that it ought to be the case that p' as 'S hoorays that p'. A problem is that the former sentence can be negated in three different ways, but the latter in only two. The distinction between refusing to accept a moral judgement and accepting its negation therefore cannot be accounted for. This is shown to undermine Blackburn's solution to the Frege-Geach problem.
This paper proposes a view on epistemic relativism that arises from the problem of the criterion, keeping in consideration that the assessment of criterion standards always occurs in a certain context. The main idea is that the epistemic value of the assertion “S knows that p” depends not only on the criterion adopted within an epistemic framework and the relationship between said criterion and a meta-criterion, but also on the collaboration with other subjects who share the same standards. Thus, one can choose between particularist and methodist criteria according to the context of assessment. This position has the advantage of presenting a new perspective concerning both the criterion problem and the problem of inter-contextuality in the evaluation of different epistemic frameworks.
In opposition to mainstream theory of mind approaches, some contemporary perceptual accounts of social cognition do not consider the central question of social cognition to be the problem of access to other minds. These perceptual accounts draw heavily on phenomenological philosophy and propose that others' mental states are “directly” given in the perception of the others' expressive behavior. Furthermore, these accounts contend that phenomenological insights into the nature of social perception lead to the dissolution of the access problem. We argue, on the contrary, that the access problem is a genuine problem that must be addressed by any account of social cognition, perceptual or non-perceptual, because we cannot cast the access problem as a false problem without violating certain fundamental intuitions about other minds. We elaborate the fundamental intuitions as three constraints on any theory of social perception: the Immediacy constraint; the Transcendence constraint; and the Accessibility constraint. We conclude with an outline of an account of perceiving other minds that meets the three constraints.
Engineering ethics entails three frames of reference: individual, professional, and social. “Microethics” considers individuals and internal relations of the engineering profession; “macroethics” applies to the collective social responsibility of the profession and to societal decisions about technology. Most research and teaching in engineering ethics, including online resources, has had a “micro” focus. Mechanisms for incorporating macroethical perspectives include: integrating engineering ethics and science, technology and society (STS); closer integration of engineering ethics and computer ethics; and consideration of the influence of professional engineering societies and corporate social responsibility programs on ethical engineering practice. Integrating macroethical issues and concerns in engineering ethics involves broadening the context of ethical problem solving. This in turn implies: developing courses emphasizing both micro and macro perspectives; providing faculty development that includes training in both STS and practical ethics; and revision of curriculum materials, including online resources. Multidisciplinary collaboration is recommended 1) to create online case studies emphasizing ethical decision making in individual, professional, and societal contexts; 2) to leverage existing online computer ethics resources with relevance to engineering education and practice; and 3) to create transparent linkages between public policy positions advocated by professional societies and codes of ethics.
In his paper The Opposite of Human Enhancement: Nanotechnology and the Blind Chicken Problem (Nanoethics 2:305–316, 2008) Paul Thompson argues that the possibility of disenhancing animals in order to improve animal welfare poses a philosophical conundrum. Although many people intuitively think such disenhancement would be morally impermissible, it’s difficult to find good arguments to support such intuitions. In this brief response to Thompson, I accept that there’s a conundrum here. But I argue that if we seriously consider whether creating beings can harm or benefit them, and introduce the non-identity problem to discussions of animal disenhancement, the conundrum is even deeper than Thompson suggests.
In this paper, it is argued that there are (at least) two different kinds of ‘epistemic normativity’ in epistemology, which can be scrutinized and revealed by some comparison with naturalistic studies of ethics. The first kind of epistemic normativity can be naturalized, but the other cannot. The doctrines of Quine’s naturalized epistemology are first introduced; then Kim’s critique of Quine’s proposal is examined. It is argued that Quine’s naturalized epistemology is able to save some room for the concept of epistemic normativity and therefore his doctrine can be protected against Kim’s critique. But it is only the first kind of epistemic normativity that can be naturalized in epistemology. With the assistance of Goldman’s fake barn case, it is shown that the concept of epistemic normativity involved in the concept of knowing cannot be fully naturalized. The Gettier problem indicates that Quine is only partly right concerning whether epistemology can (and should) be naturalized.
A difficulty is exposed in Allan Gibbard's solution to the embedding/Frege-Geach problem, namely that the difference between refusing to accept a normative judgement and accepting its negation is ignored. This is shown to undermine the whole solution.