Tamar Gendler argues that, for those living in a society in which race is a salient sociological feature, it is impossible to be fully rational: members of such a society must either fail to encode relevant information containing race, or suffer epistemic costs by being implicitly racist. However, I argue that, although Gendler calls attention to a pitfall worthy of study, she fails to conclusively demonstrate that there are epistemic (or cognitive) costs of being racist. Gendler offers three supporting phenomena. First, implicit racists expend cognitive energy repressing their implicit biases. I reply, citing Ellen Bialystok’s research, that constant use of executive functioning can be beneficial. Second, Gendler argues that awareness of a negative stereotype of one’s own race with regard to a given task negatively affects one’s performance of that task. This phenomenon, I argue, demonstrates that those against whom the stigma is directed suffer costs, but it fails to demonstrate that the stigmatizers suffer cognitively. Finally, Gendler argues that racists are less competent when recognizing faces of other races than when recognizing faces of their own race because, in the first instance, they encode the race of the face (taking up cognitive space that could have been used to encode fine-grained distinctions), whereas in the second instance they encode no race. I argue that in-group/out-group categorization rather than racism is the cognitive cost. I conclude that Gendler has failed to demonstrate that there are cognitive costs associated with being a racist.
What is the relationship between the permissibility/impermissibility of the part and the permissibility/impermissibility of the whole? Does the moral or legal status of a constituent part of an actor’s course of conduct govern the status of the actor’s whole course of conduct or, conversely, does the moral and legal status of the actor’s whole course of conduct govern the status of the constituent parts? This broader issue is examined in the more specific contexts of the contrived defense and deterrent threat doctrines. The latter doctrine concerns whether a prima facie impermissible act of carrying out a threatened action may be rendered permissible if embedded within an overall permissible course of action including the issuance of a deterrent threat that fails to induce compliance. The contrived defense doctrine addresses the permissibility of an actor who contrives or culpably causes the conditions of her own defense. This essay considers the claim—advanced by Claire Finkelstein and Leo Katz—that the contrived defense and deterrent threat doctrines are sufficiently related such that the preferable approach to each doctrine informs and supports the preferable approach to the other. In each, the permissible/impermissible status of the whole governs the status of the part. Regarding contrived defenses, the impermissibility of the actor’s whole course of conduct renders the otherwise permissible constituent part relating to the defense also impermissible. And regarding deterrent threats, the permissibility of the actor’s whole course of conduct renders the otherwise impermissible constituent parts also permissible. This essay challenges the claimed linkage between the contrived defense and deterrent threat doctrines by proposing hypothetical situations in which the claimed parallel doctrines collapse into each other. As a result, the application of the preferred approaches to each doctrine generates a contradiction.
Kant's theory of punishment is commonly regarded as purely retributive in nature, and indeed much of his discourse seems to support that interpretation. Still, it leaves one with certain misgivings regarding the internal consistency of his position. Perhaps the problem lies neither in Kant's inconsistency nor in the senility sometimes claimed to be apparent in the Metaphysic of Morals, but rather in a superimposed, modern yet monistic view of punishment. Historical considerations tend to show that Kant was discussing not one, but rather two facets of punishment, each independent but nevertheless mutually restrictive. Punishment as a threat was intended to deter crime. It was a tool in the hands of civil society to counteract human drives toward violating another's rights. In its execution, however, the state was limited in its reaction by a retributive theory of justice demanding respect for the individual as an end and not as a means to some further social goal. This interpretation of Kant's theory of punishment maintains consistency from the earliest through the latest of his writings on moral, legal, and political philosophy. It provides a good reason for rejecting current economic analyses of crime and punishment. Most important of all, it credits Kant's theory in its clear recognition of the ideals intrinsic to libertarian government.
In the first part of the paper I reconstruct Kant’s proof of the existence of a ‘most real being’ while also highlighting the theory of modality that motivates Kant’s departure from Leibniz’s version of the proof. I go on to argue that it is precisely this departure that makes the being that falls out of the pre-critical proof look more like Spinoza’s extended natura naturans than an independent, personal creator-God. In the critical period, Kant seems to think that transcendental idealism allows him to avoid this conclusion, but in the last section of the paper I argue that there is still one important version of the Spinozistic threat that remains.
This paper considers the connection between automaticity, control and agency. Indeed, recent philosophical and psychological works play up the incompatibility of automaticity and agency. Specifically, there is a threat of automaticity, for automaticity eliminates agency. Such conclusions stem from a tension between two thoughts: that automaticity pervades agency and yet automaticity rules out control. I provide an analysis of the notions of automaticity and control that maintains a simple connection: automaticity entails the absence of control. An appropriate analysis, however, shows that actions are forms of control and pervasively automatic even if automaticity implies the absence of control. Consequences are drawn for the theory of mental agency and the psychological concepts of automaticity and control.
In the early 1970s Harry Frankfurt argued that so-called 'coercive threats' cause a violation of their victim's autonomy, thereby excluding him from moral responsibility. A person is therefore not responsible for doing what he is forced to do. Although this seems correct on an intuitive level, I will use Frankfurt's later vocabulary of 'care' and 'love' in order to show that threats essentially involve an abuse of a person's autonomy instead of an infringement or violation thereof. Still, if we want to understand the sense of reluctance that is involved in acting under threat, as well as the sense of responsibility that befalls both the victim and the perpetrator, then we have to move beyond the Frankfurtian framework.
In this doctoral dissertation I consider, and reject, the claim that recent varieties of non-reductive physicalism, particularly Donald Davidson's anomalous monism, are committed to a new kind of epiphenomenalism. Non-reductive physicalists identify each mental event with a physical event, and are thus entitled to the belief that mental events are causes, since the physical events with which they are held to be identical are causes. However, Jaegwon Kim, Ernest Sosa and others have argued that if we follow the non-reductive physicalist in denying that mental features can be reduced to physical properties, then we must regard mental properties as being causally irrelevant to their bearers' effects. In short, the non-reductive physicalist is said to be committed to the belief that while there are mental causes, they do not cause their effects in virtue of being the types of mental state that they are. It is in this sense that non-reductive physicalists are thought to represent a new form of epiphenomenalism. After a brief survey of the history of epiphenomenalism, and its mutation into the contemporary strain that is believed to afflict non-reductive physicalism, I argue against the counterfactual criterion of the sort of causal relevance that we take mental features to enjoy. I then criticize the 'trope' response to the epiphenomenalist threat, and conclude that much of the current debate on this topic is premised on the mistaken belief that there is some variety of causal relevance that is not simply a brand of explanatory relevance. Once this is seen, it will seem much less plausible that mental properties are excluded from relevance to the phenomena of which we typically take them to be explanatory.
The evolutionary theory of threat simulation during dreaming indicates that themes appropriate to ancestral survival concerns (threats) should be disproportionately represented in dreams. Our studies of typical dream themes in students and sleep-disordered patients indicate that threatening dreams involving chase and pursuit are indeed among the three most prevalent themes, thus supporting Revonsuo's theory. However, many of the most prevalent themes are of positive, not negative, events (e.g., sex, flying) and of current, not ancestral, threat scenarios (e.g., schoolwork). Moreover, many clearly ancestral themes (e.g., snakes, earthquakes) are not prevalent at all in dreams. Thus, these findings challenge the specificity of the threat simulation theory. [Revonsuo].
Revonsuo argues that the biological function of dreaming is to simulate threatening events and to rehearse threat avoidance behaviors. He views recurrent dreams as an example of this function. We present data and clinical observations suggesting that (1) many types of recurrent dreams do not include threat perceptions; (2) the nature of the threat perceptions that do occur in recurrent dreams are not always realistic; and (3) successful avoidance responses are absent from most recurrent dreams and possibly nightmares. [Hobson et al.; Revonsuo].
Israel has a long history of concern with chemical and biological threats, since several hostile states in the Middle East are likely to possess such weapons. The Twin-Tower terrorist attacks and Anthrax envelope scares of 2001 were a watershed for public perceptions of the threat of unconventional terror in general and of biological terror in particular. New advances in biotechnology will only increase the ability of terrorists to exploit the burgeoning availability of related information to develop ever-more destructive bioweapons. Many areas of modern biological research are unavoidably dual-use by nature. They thus have a great potential for both help and harm; and facilitating the former while preventing the latter remains a serious challenge to researchers and governments alike. This article addresses how Israel might best (1) prevent hostile elements from obtaining, from Israel’s biological research system, materials, information and technologies that might facilitate their carrying out a biological attack, while (2) continuing to promote academic openness, excellence and other hallmarks of that system. This important and sensitive issue was assessed by a special national committee, and their recommendations are presented and discussed. One particularly innovative element is the restructuring and use of Israel’s extensive biosafety system to also address biosecurity goals, with minimal disruption or delay.
Is drastic action against global warming essential to avoid impoverishing our descendants? Or does it mean robbing the poor to give to the rich? We do not yet know. Yet most of us can agree on the importance of minimising expected deprivation. Because of the vast number of future generations, if there is any significant risk of catastrophe, this implies drastic and expensive carbon abatement unless we discount the future. I argue that we should not discount. Instead, the rich countries should stump up the funds to support abatement both for themselves and the poor states of the world. Yet to ask the present generation to assume all the costs of drastic mitigation is unfair. Worse still, it is politically unrealistic. We can square the circle by shifting part of the burden to our descendants. Even if we divert investment from other parts of the economy or increase public debt, future people should be richer, so long as we avert catastrophe. If so, it is fair for them to assume much of the cost of abatement. What we must not do is to expose them to the threat of disaster by not doing enough.
Generally speaking, just war theory (JWT) holds that there are two just causes for war: self-defence and ‘other-defence’. The most common type of the latter is popularly known as ‘humanitarian intervention’. There is debate, however, as to whether these can serve as just causes for preventive war. Those who subscribe to JWT tend to be unified in treating so-called preventive war with a high degree of suspicion on the grounds that it fails to satisfy conventional criteria for jus ad bellum – particularly the just cause and last resort criteria. Francisco de Vitoria held that the only just cause for war was ‘a wrong received’, which renders impossible any justification for preventive war. There are assumptions implicit in recent military practice, however – most notably, the US-led invasion of Iraq in 2003 – that challenge this ban on preventive war. Interestingly, both supporters and critics attempt to justify their views through the broader logic of JWT; viz., through a conception of what is good for both political communities and individuals, and through a legitimate defence of these goods. Supporters point to situations where so-called rogue states represent ‘grave and imminent risk’ of committing acts of aggression as grounds that justify preventive war; critics argue that to attack another political community on the basis of crimes not yet committed is a breach of the very rights JWT was created to defend. The advocate of preventive war does not appreciate important aspects concerning the morality of war. In the ongoing tension between Iran and the United States and her allies – if the rhetoric is to be believed – I am asked to tolerate a threat to my security and liberty, and to risk suffering aggression in defence of the rights of the antagonistic, but not yet aggressive, state.
The crucial question is how such tolerance and risk fit in with the logic of just war: at what point, if any, does the risk of being attacked become great enough to justify declaring war in anticipation? In this paper I first highlight some of the theoretical and practical difficulties in determining what counts as a grave and imminent threat, focusing especially on the complicated case of ‘imminence’ in the face of so-called ‘Weapons of Mass Destruction’. Second, I argue that not only is the notion of preventive war inconsistent with the defence of the rights of political communities that JWT requires; it is also forbidden by the proportionality requirement of jus ad bellum. A risk of being subjected to aggression is the price for global peace. Whilst political communities can do much to prevent aggression and prepare themselves in case it occurs, the conditions for just war require that this prevention and preparation stop short of declaring war. We must live with a certain degree of risk in this area.
Many with schizophrenia find social interactions a profound and terrifying threat to their sense of self. To better understand this we draw upon dialogical models of the self that suggest that those with schizophrenia have difficulty sustaining dialogues among diverse aspects of self. Because interpersonal exchanges solicit and evoke movement among diverse aspects of self, many with schizophrenia may consequently find those exchanges overwhelming, resulting in despair, the sensation of fusion with another, and/or self-dissolution. In short, compromised dialogical capacities may be a contributing factor to social dysfunction in schizophrenia.
There is evidence that people with schizophrenia have difficulties in some (recently evolved) competencies for processing social information. However, a case can be made that vulnerabilities can also lie in (previously evolved) threat and safeness processing systems. Evolutionary models may need to consider interactions between genetic sensitivities, early experiences of threat/safeness, and later cognitive vulnerabilities. Psychological treatments must address issues of experienced threat and safeness before working on more cognitive competencies.
Twenty-five years ago, when AI & Society was launched, the emphasis was, and still is, on dehumanisation and the effects of technology on human life, including reliance on technology. What we forgot to take into account was another very great danger to humans. The pervasiveness of computer technology, without appropriate security safeguards, dehumanises us by allowing criminals to steal not just our money but also our confidential and private data at will. Also, denial-of-service attacks prevent us from accessing the information we need when we want it. We are being dehumanised not by the technology but by criminals who use the ubiquity of the technology and its lack of security to steal from us and prevent us from doing what we want. What is more interesting is that this malevolent use of the technology doesn’t come from monolithic corporate structures eager to control our lives but mainly from individuals keen to demonstrate their knowledge of the technology for social networking purposes. The aim of this paper is to turn the clock back 25 years and present an alternative perspective: the single, biggest threat of dehumanisation is not the pervasiveness and ubiquity of computers but the lack of ensuring that humans are provided with the basic security they need for using the technology safely and securely. Cyberspace is not a safe space to be. This was something that even far-sighted researcher colleagues in the 1970s and 1980s overlooked. The paper will explore where we went wrong 25 years ago in our predictions and concerns. We will also present a scenario that allows future generations to have a safer cyberworld.
Dreams represent threat, but appear to do so metaphorically more often than realistically. The metaphoric representation of threat allows it to be conceptualized in a manner that is constant across situations (as what is common to all threats begins to be understood and portrayed). This also means that response to threat can come to be represented in some way that works across situations. Conscious access to dream imagery, and subsequent social communication of that imagery, can facilitate this generalized adaptive process, by allowing the communicative dreamer access to the problem solving resources of the community. [Revonsuo; Solms].
According to Revonsuo, dreams are the output of an evolved “threat simulation mechanism.” The author marshals a diverse and comprehensive array of empirical and theoretical support for this hypothesis. We propose that the hypothesized threat simulation mechanism might be more domain-specific in design than the author implies. To illustrate, we discuss the possible sex-differentiated design of the hypothesized threat simulation mechanism. [Revonsuo].
The series of conversations between Angela Y. Davis and Eduardo Mendieta entitled Abolition Democracy is a powerful investigation of the failed moral imagination of imperial democracies. After examining their discussion of how truncated political discourses enable abuses in both war and imprisonment, I look to the “exceptional” status of war prisons such as at Guantánamo and Abu Ghraib. I argue that domestic prisons, like international war prisons, are means for the paradigmatic functioning of the exception in modern democracy, as described by Giorgio Agamben, and thus constitute no less of an “ultimate carceral threat.” Within the domestic prison, the legal status of inmates is virtually suspended and they are reduced to bare life. I conclude that we may yet share the hopes of Davis and Mendieta for an abolition democracy, and that such a democracy would bear the echoes of the unconditional sovereignty “to come” theorized by Jacques Derrida.
In a short and much-neglected passage in the second Critique, Kant discusses the threat posed to human freedom by theological determinism. In this paper we present an interpretation of Kant’s conception of and response to this threat. Regarding his conception, we argue that he addresses two versions of the threat: either God causes appearances (and hence our spatio-temporal actions) directly or he does so indirectly by causing things in themselves which in turn cause appearances. Kant’s response to the first version is that God cannot cause appearances directly because they depend essentially on the passive sensibility of finite beings. Kant’s response to the second version is that human beings are endowed with transcendental freedom, which blocks the causal transitivity that is presupposed by this version. We also contrast his position on this topic with Leibniz’s and Spinoza’s.
Neurobiological and cognitive models of unconscious information processing suggest that subconscious threat detection can lead to cognitive misinterpretations and false alarms, while conscious processing is assumed to be perceptually and conceptually accurate and unambiguous. Furthermore, clinical theories suggest that pathological anxiety results from a crude preattentive warning system predominating over more sophisticated and controlled modes of processing. We investigated the hypothesis that subconscious detection of threat in a cognitive task is reflected by enhanced "false signal" detection rather than by selectively enhanced discrimination of threat items in 30 patients with panic disorder and 30 healthy controls. We presented a tachistoscopic word-nonword discrimination task and a subsequent recognition task and analyzed the data by means of process-dissociation procedures. In line with our expectations, subjects of both groups showed more false signal detection to threat than to neutral stimuli as indicated by an enhanced response bias, whereas indices of discriminative sensitivity did not show this effect. In addition, patients with panic disorder showed a generally enhanced response bias in comparison to healthy controls. They also seemed to have processed the stimuli less elaborately and less differentially. Results are consistent with the assumption that subconscious threat detection can lead to misrepresentations of stimulus significance and that pathological anxiety is characterized by a hyperactive preattentive alarm system that is insufficiently controlled by higher cognitive processes.
This book is remarkable for what it does not do. It purports to be about Peirce's opposition to nominalism, but it never states clearly what nominalism is and says little about Peirce's realist alternative. It contains no historical discussion of nominalism and thus does not explain the relation of Peirce's idiosyncratic use of that term to its original meaning. It ignores the secondary literature on that topic and does not even list Rosa Mayorga's highly relevant 2007 book, From Realism to Realicism [sic], in its Bibliography. Nor, despite nominalism's alleged 'threat,' does it make reference to such important recent nominalists as Nelson Goodman or W.V.O. Quine. Indeed, after page one, there is hardly any ...
This essay is a reading of two Hollywood films: The Defiant Ones (1958, directed by Stanley Kramer, starring Tony Curtis and Sidney Poitier) and Rising Sun (1993, directed by Philip Kaufman, starring Wesley Snipes and Sean Connery, based on the Michael Crichton novel of the same name). The essay argues that these films work to contain black demand for social and political equality not through exclusionary measures, but rather through deliberate acknowledgment of blackness as integral to US identity. My reading shows how a homosocial bond between white and black stands in for US national identity, and how this identity is unified by foregrounding the threat of an apocalyptic outcome. I use the concept of brinkmanship to illustrate the political effects of this particular narrative form. Then I move to Rising Sun, a film that employs a racial triangle of white, black and Asian men to manage black demand for social change. I argue that the narrative logic and the cultural politics of the film require any figure that is both Asian and masculine to be coded as a foreign enemy.
Radio Frequency Identification, or RFID, is a technology which has been receiving considerable attention as of late. It is a fairly simple technology involving radio wave communication between a microchip and an electronic reader, in which an identification number stored on the chip is transmitted and processed; it can frequently be found in inventory tracking and access control systems. In this paper, we examine the current uses of RFID and identify potential future uses of the technology, including item-level tagging, human implants and RFID-chipped passports, while discussing the impacts that each of these uses could potentially have on personal privacy. Possible guidelines for RFID’s use, including Fair Information Principles and the RFID Bill of Rights, are then presented, as well as technological solutions to personal privacy problems, such as tag killing, blocker tags, and simple aluminum foil shields for passports. It is then claimed, though, that guidelines and technological solutions will be ineffective for privacy protection, and that legislation will be necessary to guard against the threats posed by RFID. Finally, we present what we believe to be the most important legislative points that must be addressed.
The architecture of the hazard management system underlying precautionary behavior makes functional sense, given the adaptive computational problems it evolved to solve. Many seeming infelicities in its outputs, such as behavior with “apparent lack of rational motivation” or disproportionality, are susceptibilities that derive from the sheer computational difficulty posed by the problem of cost-effectively deploying countermeasures to rare, harmful threats. (Published Online February 8 2007).
Situationists argue that virtue ethics is empirically untenable, since traditional virtue ethicists postulate broad, efficacious character traits, and social psychology suggests that such traits do not exist. I argue that prominent philosophical replies to this challenge do not succeed. But cross-cultural research gives reason to postulate character traits, and this undermines the situationist critique. There is, however, another empirical challenge to virtue ethics that is harder to escape. Character traits are culturally informed, as are our ideals of what traits are virtuous, and our ideals of what qualifies as well-being. If virtues and well-being are culturally constructed ideals, then the standard strategy for grounding the normativity of virtue ethics in human nature is undermined.
We are discovering more and more about human genotypes and about the connections between genotype and behaviour. Do these advances in genetic information threaten our free will? This paper offers a philosopher’s perspective on the question.
In this paper, I examine a new line of response to Frankfurt’s challenge to the traditional association of moral responsibility with the ability to do otherwise. According to this response, Frankfurt’s counterexample strategy fails, not in light of the conditions for moral responsibility per se, but in view of the conditions for action. Specifically, it is claimed, a piece of behavior counts as an action only if it is within the agent’s power to avoid performing it. In so far as Frankfurt’s challenge presupposes that actions can be unavoidable, this view of action seems to bring his challenge up short. Helen Steward and Maria Alvarez have independently proposed versions of this response. Here I argue that this response is unavailable to Frankfurt’s incompatibilist opponents. This becomes evident when we put this question to its proponents: “Are actions that originate deterministically ipso facto unavoidable?” If they answer “yes,” they encounter one horn of a dilemma. If they answer “no,” they encounter the other horn. Since no one has a clearer stake in meeting Frankfurt’s challenge than these theorists do, it is significant that the Steward-Alvarez response is unavailable to them.
Many contemporary epistemologists hold that a subject S’s true belief that p counts as knowledge only if S’s belief that p is also, in some important sense, safe. I describe accounts of this safety condition from John Hawthorne, Duncan Pritchard, and Ernest Sosa. There have been three counterexamples to safety proposed in the recent literature, from Comesaña, Neta and Rohrbaugh, and Kelp. I explain why all three proposals fail: each moves fallaciously from the fact that S was at epistemic risk just before forming her belief to the conclusion that S’s belief was formed unsafely. In light of lessons from their failure, I provide a new and successful counterexample to the safety condition on knowledge. It follows, then, that knowledge need not be safe. Safety at a time depends counterfactually on what would likely happen at that time or soon after in a way that knowledge does not. I close by considering one objection concerning higher-order safety.
Machine-generated contents note: List of abbreviations; Preface; 1. Nominalism as demonic doctrine; 2. Logic, philosophy and the special sciences; 3. Continuity and the problem of universals; 4. Continuity and meaning: Peirce's pragmatic maxim; 5. Logical foundations of Peirce's pragmatic maxim; 6. Experience and its role in inquiry; 7. Scientific method as self-corrective - Peirce's view of the problem of knowledge; 8. The unity of Peirce's theories of truth; 9. Order from chaos: Peirce's evolutionary cosmology; 10. A universe of chance: foundations of Peirce's indeterminism; 11. From inquiry to ethics: the pursuit of truth as moral ideal.
In a sense, all technology is biotechnology: machines interacting with human organisms. Technology is designed to overcome the frailties and limitations of human beings in a state of nature -- to make us faster, stronger, longer-lived, smarter, happier. And all technology raises questions about its real contribution to human welfare: are our lives really better for the existence of the automobile, television, nuclear power? These questions are ethical and political, as well as medical; and they even reach to the philosophical and spiritual. On the whole, we seem pretty well adapted to our technology, at least on the face of it -- but there have always been doubts about whether the human soul thrives best in the oppressively technological world we have created for ourselves. (I am continually struck by how much time I have to spend fixing the machines that supposedly improve my life.)
In his famous 1950 paper where he presents what became the benchmark for success in artificial intelligence, Turing notes that "at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted" (Turing 1950, 442). Kurzweil (1990) suggests that Turing's prediction was correct, even though no machine has yet passed the Turing Test. In the wake of the computer revolution, research in artificial intelligence and cognitive science has pushed in the direction of interpreting "thinking" as some sort of computational process. On this understanding, thinking is something computers (in principle) and humans (in practice) can both do. It is difficult to say precisely when in history the meaning of the term "thinking" headed in this direction. Signs are already present in the mechanistic and mathematical tendencies of the early Modern period, and maybe even glimmers are apparent in the ancient Greeks themselves. But over the long haul, we somehow now consider "thinking" as separate from the categories of "thoughtfulness" (in the general sense of wondering about things), "insight" and "wisdom." Intelligent machines are all around us, and the world is populated with smart cars, smart phones and even smart (robotic) appliances. But, though my cell phone might be smart, I do not take that to mean that it is thoughtful, insightful or wise. So, what has become of these latter categories? They seem to be bygones left behind by scientific and computational conceptions of thinking and knowledge that no longer have much use for them. In 2000, Allen, Varner and Zinser addressed the possibility of a Moral Turing Test (MTT) to judge the success of an automated moral agent (AMA), a theme that is repeated in Wallach and Allen (2009).
According to the theory theory of folk psychology, our engagement in the folk psychological practices of prediction, interpretation and explanation draws on a rich body of knowledge about psychological matters. According to the simulation theory, in apparent contrast, a fundamental role is played by our ability to identify with another person in imagination and to replicate or re-enact aspects of the other person’s mental life. But amongst theory theorists, and amongst simulation theorists, there are significant differences of approach.
Some critics of same-sex marriage allege that this kind of union not only betrays the nature of marriage but that it also opens children to various kinds of harm. Same-sex marriage is objectionable, on this view, in its nature and in its effects. A view of marriage as requiring an unassisted capacity to conceive children may be respected as one idea of marriage, but this view need not be understood as marriage itself. It is not clear, in any case, why government should prefer this one idealized view of marriage over others, so long as recognition of other kinds of marriage does not stand in the way of government carrying out its core interests, such as the protection of children. The idea that children are necessarily harmed when conceived by and for same-sex couples cannot be sustained as a matter of psychological evidence or moral argument. No research shows that such children are harmed either routinely or rarely but catastrophically. Nor can comparative accounts of the welfare of children of same-sex couples show that children must be brought into existence only under ideal circumstances.
A recent article by Jeff Kochan contains a discussion of modus ponens that among other things alleges that the paradox of the heap is a counterexample to it. In this note I show that it is the conditional major premise of a modus ponens inference, rather than the rule itself, that is impugned. This premise is the contrapositive of the inductive step in the principle of mathematical induction, confirming the widely accepted view that it is the vagueness of natural language predicates, not modus ponens, that is challenged by Sorites.
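The diagnosis in the note above can be made schematic. Writing H(n) for "n grains make a heap" (the notation is illustrative, not taken from Kochan or the note), the sorites reasoning and the premise it impugns look like this:

```latex
% Sorites as iterated modus ponens, with H(n) = "n grains make a heap"
\begin{align*}
&\text{Base premise: } H(10{,}000)\\
&\text{Tolerance premise: } \forall n\,\big(H(n+1) \rightarrow H(n)\big)\\
&\text{Repeated modus ponens: } H(10{,}000),\ H(9{,}999),\ \ldots,\ H(1)
  \quad\text{(absurd)}
\end{align*}
% The tolerance premise is the contrapositive of the inductive step
% \forall n\,(\neg H(n) \rightarrow \neg H(n+1)), which together with the
% base case \neg H(1) would prove \forall n\,\neg H(n) by induction.
```

On the diagnosis defended in the note, what fails is this vague conditional premise, applied to the borderline cases of "heap", and not the rule of modus ponens itself.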
The appeal of scientific realism is chiefly based on the – staggering – empirical success of the theories currently accepted in science. The realist exhibits some currently accepted scientific theory (the General Theory of Relativity, say), points to its astounding empirical success (with the gravitational redshift, the precession of Mercury’s perihelion, etc.) and suggests that it would be monumentally implausible to suppose that the theory could score such empirical successes and yet not reflect, at least to some good approximation, the underlying nature of reality. To hold that combination of beliefs would be, in Poincaré’s celebrated phrase (1905/1952, p. 150), “to attribute an inadmissible role to chance”.
The traditional tripartite and tetrapartite analyses describe the conceptual components of propositional knowledge from a universal epistemic point of view. According to the classical analysis, since truth is a necessary condition of knowledge, it does not make sense to talk about “false knowledge” or “knowing wrongly.” There are nonetheless some natural languages in which speakers ordinarily make statements about a person’s knowing a given subject matter wrongly. In this paper, we first provide a brief analysis of “knowing wrongly” in Turkish. Then, taking Allan Hazlett’s recent account of the gap between traditional analyses of knowledge and actual epistemic practices of real cognitive agents as a point of departure, we spell out a non-universalist and non-extensionalist perspective on the value of “knowing wrongly.”
Imagine a world in which as part of their basic substances tomatoes contain fish and tobacco, potatoes contain chicken, moths and other insects, and corn contains fireflies. Is this science-fiction? No, these plant-animal hybrids already exist today and may soon be on your supermarket shelves without any special labeling to warn you. Furthermore, in a few years the types of these genetically engineered "vegetables" are sure to increase and may very possibly also include human genes. If you are a vegetarian, do you want to be in the position of inadvertently eating vegetables that are part meat? Even if you are not a vegetarian, are you ready to become a cannibal and eat foods that are part human being?
The empirical evidence often justifies belief in scientific theories. For instance, the great wealth of chemical and other relevant data leaves us with no real alternative to believing that matter is made of atoms. Similarly, the natural history of past and present organisms makes it irrational to deny that life on earth has evolved from a common ancestry. Again, the character and epidemiology of infectious diseases effectively establishes that they are caused by microbes. Peter Lipton did much to illuminate the logic of these and many similar inferences. Often the observed facts admit of only one good theoretical explanation. Rationality therefore dictates that we infer the truth of this explanation (Lipton 1991/2004).
This article discusses the implications of moral dissonance for managers, and how dissonance induced self justification can create an amplifying feedback loop and downward spiral of immoral behaviour. After addressing the nature of moral dissonance, including the difference between moral and hedonistic dissonance, the writer then focuses on dissonance reduction strategies available to managers such as rationalization, self affirmation, self justification, etc. It is noted that there is a considerable literature which views the organization as a potentially corrupting institution and a source of acute levels of moral dissonance. A simplified process model linking immoral behaviour, dissonance and rationalization is mooted, and some recent theories which question traditional dissonance models, including the free choice paradigm (FCP), are considered. The writer concludes that in the light of the above mentioned critical theories, it may be assumed that the levels of moral dissonance, and the extent of rationalization/self justification amongst managers, are more a function of personality and situational factors than previously assumed.
Polygenic effects have more than one cause. They testify to the fact that several causal contributors are sometimes simultaneously involved in causation. The importance of polygenic causation was noticed early on by Mill (1893). It has since been shown to be a problem for causal-law approaches to causation and accounts of causation cast in terms of capacities. However, polygenic causation needs to be examined more thoroughly in the emerging literature on causal mechanisms. In this paper I examine whether an influential theory of mechanisms proposed by Peter Machamer, Lindley Darden and Carl Craver can accommodate polygenic effects and other forms of causal interaction. This theory is problematic, I will argue, because it ascribes a central role to activities. In it, activities are needed not only to constitute mechanisms but also to perform the causal role of mechanisms. Any such mechanism-as-activity will be incompatible with causal situations where either no or merely another kind of activity occurs. But, as I will try to illustrate in this paper, both kinds of situation may be frequent. If I am right, the view that Machamer and colleagues suggest leads to an impoverished conception of mechanism.
Although most of us understand and accept that we play different roles in different settings, the moral implications of an unquestioned role-based world are serious. The prevalence of roles at the expense of ‘real’ people in organizations jeopardizes our ability to exercise full moral agency and ascribe moral responsibility, because ‘we were only fulfilling our role obligations’. This reasoning does not sustain ethical scrutiny, however, because individuals are always present behind the role, though they may lack awareness of their ability to choose and act as fully fledged individuals. The article argues that moral responsibility requires us to move away from a role-based life game which leads us to compartmentalize and forget who we are and what we value at a significant cost. On the contrary, an understanding of the process of compartmentalization and a greater awareness of the complex yet holistic nature of the self contribute to furthering moral integrity and responsibility.
In this paper I argue that demonstrative induction can deal with the problem of the underdetermination of theory by evidence. I present the historical case study of spectroscopy in the early 1920s, where the choice among different theories was apparently underdetermined by spectroscopic evidence concerning the alkali doublets and their anomalous Zeeman effect. By casting this historical episode within the methodological framework of demonstrative induction, the local underdetermination among Bohr's, Heisenberg's, and Pauli's rival theories is resolved in favour of Pauli's theory of the electron's spin.
Philip Kitcher rejects the global pessimists' view that the conclusions reached in inquiry are determined by the interests of some segment of the population, arguing that only some inquiries, for example, inquiries into race and gender, are adversely affected by interests. I argue that the biases Kitcher believes affect such inquiries are operative in all domains, but the prevalence of such biases does not support global pessimism. I argue further that in order to address the global pessimists' concerns, the scientific community needs criticism from people with diverse interests and background assumptions.
A striking feature of Confucius' grief at the death of his beloved disciple Yan Hui is its profound intensity, an intensity detectable nowhere else in the Analects. Like his disciples, the reader of the Analects may be puzzled by the depth of Confucius' grief in this instance. In distinct accounts, Philip Ivanhoe and Amy Olberding bring some measure of intelligibility to the Master's grief. While partially plausible, I think their offerings on the matter fall short of being fully satisfying. Specifically, I argue that Olberding's proposal that Confucius loses certain developmental avenues after Hui's death should be augmented with the claim that the great depth of his grief largely follows from the importance of Confucius' expression of virtue in the lives of his disciples. It was Yan Hui who best facilitated his Master's expression of virtue, and with Hui's passing, Confucius loses an avenue to a robust expression of virtue, a loss he laments deeply.
Accounts of ontic explanation have often been devised so as to provide an understanding of mechanism and of causation. Ontic accounts differ quite radically in their ontologies, and one of the latest additions to this tradition proposed by Peter Machamer, Lindley Darden and Carl Craver reintroduces the concept of activity. In this paper I ask whether this influential and activity-based account of mechanisms is viable as an ontic account. I focus on polygenic scenarios—scenarios in which the causal truths depend on more than one cause. The importance of polygenic causation was noticed early on by Mill (1893). It has since been shown to be a problem for both causal-law approaches to causation (Cartwright 1983) and accounts of causation cast in terms of capacities (Dupré 1993; Glennan 1997, pp. 605-626). However, whereas mechanistic accounts seem to be attractive precisely because they promise to handle complicated causal scenarios, polygenic causation needs to be examined more thoroughly in the emerging literature on activity-based mechanisms. The activity-based account proposed in Machamer et al. (2000, pp. 1-25) is problematic as an ontic account, I will argue. It seems necessary to ask, of any ontic account, how well it performs in causal situations where—at the explanandum level of mechanism—no activity occurs. In addition, it should be asked how well the activity-based account performs in situations where there are too few activities around to match the polygenic causal origin of the explanandum. The first situation presents an explanandum-problem and the second situation presents an explanans-problem—I will argue—both of which threaten activity-based frameworks.
Starting from a distinction between global and globalised and a definition of the concept of global threat for future generations, this paper aims to identify cognitive, moral and emotional phenomena that hinder the adoption of effective policies against global warming. The main thesis of this paper is that it is difficult to reduce emissions of greenhouse gases mainly because unlimited economic growth is the imperative of our society and the continuous increase of material goods and personal consumption is what is closest to our idea of happiness and wellbeing.
Examination of the subversive nature of philosophy as its students challenge the authority and practices of government agencies and organizations. Draws a series of connections between philosophically oriented protesters and questioners of authority ranging from Socrates to protesters at the U.S. Republican Party’s 2004 presidential convention.
The genetic diversity argument (GDA) is one of the most commonly voiced objections to advances in reproductive and genetic technologies. According to the argument, scientific and technological developments in the realm of genetics and human reproduction will lead to lower genetic diversity, which will threaten the health and survivability of the human population. This discussion explicates and analyzes the GDA and challenges its empirical assumptions. It also discusses the possible significance of the GDA in our overall thinking about genetics and human reproduction and examines two proposals for preserving "useful" genes.
A discussion of respects in which climate change is expected to affect larger-scale bioethical issues, with some focus on the moral value of community understood as relationships of identity and solidarity.