New concepts may prove necessary to profit from the avalanche of sequence data on the genome, transcriptome, proteome and interactome and to relate this information to cell physiology. Here, we focus on the concept of large activity-based structures, or hyperstructures, in which a variety of types of molecules are brought together to perform a function. We review the evidence for the existence of hyperstructures responsible for the initiation of DNA replication, the sequestration of newly replicated origins of replication, cell division and for metabolism. The processes responsible for hyperstructure formation include changes in enzyme affinities due to metabolite-induction, lipid-protein affinities, elevated local concentrations of proteins and their binding sites on DNA and RNA, and transertion. Experimental techniques exist that can be used to study hyperstructures and we review some of the ones less familiar to biologists. Finally, we speculate on how a variety of in silico approaches involving cellular automata and multi-agent systems could be combined to develop new concepts in the form of an Integrated cell (I-cell) which would undergo selection for growth and survival in a world of artificial microbiology.
What is hermeneutics? -- The suffering stranger and the hermeneutics of trust -- Sandor Ferenczi: the analyst of last resort and the hermeneutics of trauma -- Frieda Fromm-Reichmann: incommunicable loneliness -- D.W. Winnicott: humanitarian without sentimentality -- Heinz Kohut: glimpsing the hidden suffering -- Bernard Brandchaft: liberating the incarcerated spirit.
The United States-Vietnam War appeared on television at the time and later in Hollywood movies. It is being interpreted again through documentary film as part of an international effort to bring attention to the devastating and continuing health effects of the American wartime use of the herbicide Agent Orange in Vietnam. This article analyzes documentary films about Vietnam and their representation of Agent Orange, disabilities, children, and gender.
"Many of you know about an important disagreement that Jenny Wade has with Spiral Dynamics, namely, whether orange and green are two different stages of development or whether they are two different paths through the same stage of development (see her book, Changes of Mind). Both Don Beck and Jenny Wade are members of IC, so it's an in-house friendly disagreement. Also, this discussion is a little bit technical, and demands a general grasp of what we call a phase-4 model--'all quadrants, all levels, all lines, all states'--but I'll go through it briefly for those who are interested."
Nicole C. Karafyllis and Gotlind Ulshöfer (Eds): Sexualised Brains: Scientific Modelling of Emotional Intelligence from a Cultural Perspective. Book review by Antje Kampf, Institute for the History, Philosophy and Ethics of Medicine, School of Medicine of the Johannes Gutenberg University Mainz, Am Pulverturm 13, 55131 Mainz, Germany. Medicine Studies, Volume 1, Number 4, pp. 407-408. DOI 10.1007/s12376-009-0035-3. Online ISSN 1876-4541; Print ISSN 1876-4533.
Ecofeminism, a new vein in feminist theory, critiques the ontology of domination, whereby living beings are reduced to the status of objects, which diminishes their moral significance, enabling their exploitation, abuse, and destruction. This article explores the possibility of an ecofeminist literary and cultural practice, whereby the text is not reduced to an "it" but rather recognized as a "thou," and where new modes of relationship (dialogue, conversation, and meditative attentiveness) are developed.
Reflectance physicalism only provides a partial picture of the ontology of color. Byrne & Hilbert’s account is unsatisfactory because the replacement of reflectance functions by productance functions is ad hoc, unclear, and only leads to new problems. Furthermore, the effects of color contrast and differences in illumination are not really taken seriously: Too many “real” colors are tacitly dismissed as illusory, and this for arbitrary reasons. We claim that there cannot be an all-embracing ontology for color.
SUPPOSE that I report that I have at this moment a roundish, blurry-edged after-image which is yellowish towards its edge and is orange towards its centre. What is it that I am reporting?1 One answer to this question might be that I am not reporting anything, that when I say that it looks to me as though there is a roundish yellowy orange patch of light on the wall I am expressing some sort of temptation, the temptation to say that there is a roundish yellowy orange patch on the wall (though I may know that there is not such a patch on the wall). This is perhaps Wittgenstein's view in the Philosophical Investigations (see paragraphs 367, 370). Similarly, when I "report" a pain, I am not really reporting anything (or, if you like, I am reporting in a queer sense of "reporting"), but am doing a sophisticated sort of wince. (See paragraph 244: "The verbal expression of pain replaces crying and does not describe it." Nor does it describe anything else?)2 I prefer most of the time to discuss an afterimage rather than a pain, because the word "pain" brings in something which is irrelevant to my purpose: the notion of "distress." I think that "he is in pain" entails "he is in distress," that is, that he is in a certain agitation-condition.3 Similarly, to say "I am in pain" may be to do more than "replace pain behavior": it may be partly to report something, though this something is quite nonmysterious, being an agitation-condition, and so susceptible of behavioristic analysis. The suggestion I wish if possible to avoid is a different one, namely that "I am in pain" is a genuine report, and that what it reports is an irreducibly psychical something. And similarly the suggestion I wish to resist is also that to say "I have a yellowish orange after-image" is to report something irreducibly psychical.
It has become standard for feminist philosophers of language to analyze Catherine MacKinnon's claim in terms of speech act theory. Backed by the Austinian observation that speech can do things and the legal claim that pornography is speech, the claim is that the speech acts performed by means of pornography silence women. This turns upon the notion of illocutionary silencing, or disablement. In this paper I observe that the focus by feminist philosophers of language on the failure to achieve uptake for illocutionary acts serves to group together different kinds of illocutionary silencing which function in very different ways.
Various authors debate the question of whether neuroscience is relevant to criminal responsibility. However, a plethora of different techniques and technologies, each with their own abilities and drawbacks, lurks beneath the label “neuroscience”; and in criminal law responsibility is not a single, unitary and generic concept, but is rather a syndrome of at least six different concepts. Consequently, there are at least six different responsibility questions that the criminal law asks – at least one for each responsibility concept – and, I will suggest, a multitude of ways in which the techniques and technologies that comprise neuroscience might help us to address those diverse questions. In a way, on my account neuroscience is relevant to criminal responsibility in many ways, but I hesitate to state my position like this because doing so obscures two points which I would rather highlight: one, neither neuroscience nor criminal responsibility is as unified as that; and two, the criminal law asks many different responsibility questions and not just one generic question.
It is generally agreed that vague predicates like ‘red’, ‘rich’, ‘tall’, and ‘bald’, have borderline cases of application. For instance, a cloth patch whose color lies midway between a definite red and a definite orange is a borderline case for ‘red’, and an American man five feet eleven inches in height is (arguably) a borderline case for ‘tall’. The proper analysis of borderline cases is a matter of dispute, but most theorists of vagueness agree at least in the thought that borderline cases for vague predicate ‘ ’ are items whose satisfaction of ‘ ’ is in some sense unclear or problematic: it is unclear whether or not the patch is red, unclear whether or not the man is tall.1 For example, Lynda Burns cites a widespread view as holding that borderline cases “are not definitely within the positive or negative extension of the predicate. … Borderline cases are seen as falling within a gap between the cases of definite application of the predicate and cases of definite application of its negation” (1995, 30). Michael Tye writes that the “concept of a borderline case is the concept of a case that is neither definitely in nor definitely out” (1994b, 18).
Luck egalitarians think that considerations of responsibility can excuse departures from strict equality. However, critics argue that allowing responsibility to play this role has objectionably harsh consequences. Luck egalitarians usually respond either by explaining why that harshness is not excessive, or by identifying allegedly legitimate exclusions from the default responsibility-tracking rule to tone down that harshness. And in response, critics respectively deny that this harshness is not excessive, or they argue that those exclusions would be ineffective or lacking in justification. Rather than taking sides, after criticizing both positions I also argue that this way of carrying on the debate – i.e. as a debate about whether the harsh demands of responsibility outweigh other considerations, and about whether exclusions to responsibility-tracking would be effective and/or justified – is deeply problematic. On my account, the demands of responsibility do not – in fact, they cannot – conflict with the demands of other normative considerations, because responsibility only provides a formal structure within which those other considerations determine how people may be treated, but it does not generate its own practical demands.
This paper considers the question ‘How should institutions enable people to meet their needs in situations where there is no guarantee that all needs can be met?’ After considering and rejecting several simple principles for meeting needs, it suggests a new effectiveness principle that 1) gives greater weight to the needs of the less well off and 2) gives weight to enabling a greater number of people to meet their needs. The effectiveness principle has some advantages over the main competitors, including a principle suggested by David Miller in Principles of Social Justice. Miller argues that his principle accounts for the existing data on individuals’ intuitions about meeting needs. The effectiveness principle better accounts for this data. Furthermore, this paper presents a new experiment on intuitions about meeting needs that is consistent with the effectiveness principle but not Miller’s principle.
The claim that a functional kind is multiply realized is typically motivated by appeal to intuitive examples. We are seldom told explicitly what the relevant structures are, and people have often preferred to rely on general intuitions in these cases. This article deals with the problem by explaining how to understand the proper relation between structural kinds and the functions they realize. I will suggest that the structural kinds that realize a function can be properly identified by attending to the context of functional explanation. *Received June 2006; revised June 2009. †To contact the author, please write to: Department of Philosophy, Seton Hall University, 400 South Orange Ave., South Orange, NJ 07079; e‐mail: email@example.com.
What should environmentalists say about free trade? Many environmentalists object to free trade by appealing to the “Race to the Bottom Argument.” This argument is inconclusive, but there are reasons to worry about unrestricted free trade’s environmental effects nonetheless; the rules of trade embodied in institutions such as the World Trade Organization may be unjustifiable. Programs to compensate for trade-related environmental damage, appropriate trade barriers, and consumer movements may be necessary and desirable. At least environmentalists should consider these alternatives to unrestricted free trade if they do not prevent the achievement of other important moral objectives, can efficiently reduce environmental problems, and institutional safeguards can prevent their abuse.
In this paper, we present a conditional argument for the moral permissibility of some kinds of infanticide. The argument is based on a certain view of consciousness and the claim that there is an intimate connection between consciousness and infanticide. In bare outline, the argument is this: it is impermissible to intentionally kill a creature only if the creature is conscious; it is reasonable to believe that there is some time at which human infants are not conscious; therefore, it is reasonable to believe that it is permissible to intentionally kill some human infants.
These are some of the rules of classification and definition. But although nothing is more important in science than classifying and defining well, we need say no more about it here, because it depends much more on our knowledge of the subject matter being discussed than on the rules of logic. (Arnauld and Nicole 1683/1996, p. 128).
It can happen that a single surface S, viewed in normal conditions, looks pure blue (“true blue”) to observer John but looks blue tinged with green to a second observer, Jane, even though both are normal in the sense that they pass the standard psychophysical tests for color vision. Tye (2006a) finds this situation prima facie puzzling, and then offers two different “solutions” to the puzzle.1 The first is that at least one observer misrepresents S’s color because, though normal in the sense explained, she is not a Normal color observer: her color detection system is not operating in the current condition in the way that Mother Nature intended it to operate. His second solution involves the idea that Mother Nature designed our color detection systems to be reliable with respect to the detection of coarse-grained colors (e.g., blue, green, yellow, orange), but our capacity to represent the fine-grained colors (e.g., true blue, blue tinged with green) is an undesigned spandrel. On this second solution, it is consistent with the variation between John and Jane that both represent the color of S in a way that complies with Mother Nature’s intentions: both represent S as exemplifying the coarse-grained color blue, and since (we may assume) S is in fact blue, both represent it veridically. Of course, they also represent fine-grained colors of S, and, according to Tye, at most one of these representations is veridical (Tye says that only God knows which). But at the level of representation for which Mother Nature designed our color detection systems, both John and Jane (qua Normal observers) are reliable detectors.
In this paper I argue that Beall and Restall's claim that there is one true logic of metaphysical modality is incompatible with the formulation of logical pluralism that they give. I investigate various ways of reconciling their pluralism with this claim, but conclude that none of the options can be made to work.
This thesis considers two allegations which conservatives often level at no-fault systems — namely, that responsibility is abnegated under no-fault systems, and that no-fault systems under- and over-compensate. I argue that although each of these allegations can be satisfactorily met – the responsibility allegation rests on the mistaken assumption that to properly take responsibility for our actions we must accept liability for those losses for which we are causally responsible; and the compensation allegation rests on the mistaken assumption that tort law’s compensatory decisions provide a legitimate norm against which no-fault’s decisions can be compared and criticized – doing so leads in a direction which is at odds with accident law reform advocates’ typical recommendations. On my account, accident law should not just be reformed in line with no-fault’s principles, but rather it should be completely abandoned since the principles that protect no-fault systems from the conservatives’ two allegations are incompatible with retaining the category of accident law, they entail that no-fault systems are a form of social welfare and not accident law systems, and that under these systems serious deprivation – and to a lesser extent causal responsibility – should be conditions of eligibility to claim benefits.
Fred Adams and collaborators advocate a view on which empty-name sentences semantically encode incomplete propositions, but which can be used to conversationally implicate descriptive propositions. This account has come under criticism recently from Marga Reimer and Anthony Everett. Reimer correctly observes that their account does not pass a natural test for conversational implicatures, namely, that an explanation of our intuitions in terms of implicature should be such that we upon hearing it recognize it to be roughly correct. Everett argues that the implicature view provides an explanation of only some of our intuitions, and is in fact incompatible with others, especially those concerning the modal profile of sentences containing empty names. I offer a pragmatist treatment of empty names based upon the recognition that the Gricean distinction between what is said and what is implicated is not exhaustive, and argue that such a solution avoids both Everett’s and Reimer’s criticisms.
At the heart of the underdetermination of scientific theory by evidence is the simple idea that the evidence available to us at a given time may fail to determine what beliefs we should hold in response to it. In a textbook example, if all I know is that you spent $10 on apples and oranges and that apples cost $1 while oranges cost $2, then I know that you did not buy six oranges, but I do not know whether you bought one orange and eight apples, two oranges and six apples, and so on. A simple scientific example can be found in the rationale behind the sensible methodological adage that “correlation does not imply causation”. If watching lots of cartoons causes children to be more violent in their playground behavior then we should (barring complications) expect to find a correlation between levels of cartoon viewing and violent playground behavior. But that is also what we would expect to find if children who are prone to violence tend to enjoy and seek out cartoons more than other children, or if propensities to violence and increased cartoon viewing are both caused by some third factor (like general parental neglect or excessive consumption of jellybeans). So a high correlation between cartoon viewing and violent playground behavior is evidence that (by itself) simply underdetermines what we should believe about the causal relationship between these two activities. As we will see, however, the challenge of distinguishing correlation from causation is far from the only important circumstance in which underdetermination is thought to arise in scientific inquiry.
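The apples-and-oranges arithmetic can be made concrete with a short enumeration (an illustrative sketch, not part of the original text; the variable names and the assumption of whole fruit are ours):

```python
# Assumption: whole fruit only, and exactly $10 spent.
# Apples cost $1, oranges cost $2.
solutions = [
    (apples, oranges)
    for oranges in range(0, 11)
    for apples in range(0, 11)
    if apples * 1 + oranges * 2 == 10
]

# The evidence rules out six oranges (6 * $2 = $12 > $10),
# but leaves several distinct purchase hypotheses standing.
for apples, oranges in solutions:
    print(f"{apples} apples, {oranges} oranges")
```

Running this lists six consistent purchases, from ten apples and no oranges down to five oranges and no apples, which is exactly the sense in which the $10 datum underdetermines what was bought.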
Egalitarians must address two questions: i. What should there be an equality of, which concerns the currency of the ‘equalisandum’; and ii. How should this thing be allocated to achieve the so-called equal distribution? A plausible initial composite answer to these two questions is that resources should be allocated in accordance with choice, because this way the resulting distribution of the said equalisandum will ‘track responsibility’ — responsibility will be tracked in the sense that only we will be responsible for (...) the resources that are available to us, since our allocation of resources will be a consequence of our own choices. But the effects of actual choices should not be preserved until the prior effects of luck in constitution and circumstance are first eliminated. For instance, people can choose badly because their choice-making capacity was compromised due to a lack of intelligence (i.e. due to constitutional bad luck), or because only bad options were open to them (i.e. due to circumstantial bad luck), and under such conditions we are not responsible for our choices. So perhaps a better composite answer to our two questions (from the perspective of tracking responsibility) might be that resources should be allocated so as to reflect people’s choices, but only once those choices have been corrected for the distorting effects of constitutional and circumstantial luck, and on this account choice preservation and luck elimination are two complementary aims of the egalitarian ideal. 
Nevertheless, it is one thing to say that luck’s effects should be eliminated, but quite another to figure out just how much resource redistribution would be required to achieve this outcome, and so it was precisely for this purpose that in 1981 Ronald Dworkin developed the ingenious hypothetical insurance market argumentative device (HIMAD), which he then used in conjunction with the talent slavery (TS) argument, to arrive at an estimate of the amount of redistribution that would be required to reduce the extent of luck’s effects. However, recently Daniel Markovits has cast doubt over Dworkin’s estimates of the amount of redistribution that would be required, by pointing out flaws with his understanding of how the hypothetical insurance market would function. Nevertheless, Markovits patched it up and he used this patched-up version of Dworkin’s HIMAD together with his own version of the TS argument to reach his own conservative estimate of how much redistribution there ought to be in an egalitarian society. Notably though, on Markovits’ account once the HIMAD is patched-up and properly understood, the TS argument will also allegedly show that the two aims of egalitarianism are not necessarily complementary, but rather that they can actually compete with one another. According to his own ‘equal-agent’ egalitarian theory, the aim of choice preservation is more important than the aim of luck elimination, and so he alleges that when the latter aim comes into conflict with the former aim then the latter will need to be sacrificed to ensure that people are not subordinated to one another as agents. I believe that Markovits’ critique of Dworkin is spot on, but I also think that his own positive thesis — and hence his conclusion about how much redistribution there ought to be in an egalitarian society — is flawed. 
Hence, this paper will begin in Section I by explaining how Dworkin uses the HIMAD and his TS argument to estimate the amount of redistribution that there ought to be in an egalitarian society — this section will be largely expository in content. Markovits’ critique of Dworkin will then be outlined in Section II, as will be his own positive thesis. My critique of Markovits, and my own positive thesis, will then make a fleeting appearance in Section III. Finally, I will conclude by rejecting both Dworkin’s and Markovits’ estimates of the amount of redistribution that there ought to be in an egalitarian society, and by reaffirming the responsibility-tracking egalitarian claim that choice preservation and luck elimination are complementary and not competing egalitarian aims. (shrink)
It could be argued that tort law is failing, and arguably an example of this failure is the recent public liability and insurance (‘PL&I’) crisis. A number of solutions have been proposed, but ultimately the chosen solution should address whatever we take to be the cause of this failure. On one account, the PL&I crisis is a result of an unwarranted expansion of the scope of tort law. Proponents of this position sometimes argue that the duty of care owed by defendants to plaintiffs has expanded beyond reasonable levels, such that parties who were not really responsible for another’s misfortune are successfully sued, while those who really were to blame get away without taking any responsibility. However, people should take responsibility for their actions, and the only likely consequence of allowing them to shirk it is that they and others will be less likely to exercise due care in the future, since the deterrents of liability and of no compensation for accidentally self-imposed losses will not be there. Others also argue that this expansion is not warranted because it is inappropriately fueled by ‘deep pocket’ considerations rather than by considerations of fault. They argue that the presence of liability insurance sways the judiciary to award damages against defendants since they know that insurers, and not the defendant personally, will pay for it in the end anyway. But although it may seem that no real person has to bear these burdens when they are imposed onto insurers, in reality all of society bears them collectively when insurers are forced to hike their premiums to cover these increasing damages payments. In any case, it seems unfair to force insurers to cover these costs simply because they can afford to do so. If such an expansion is indeed the cause of the PL&I crisis, then a contraction of the scope of tort liability, and a pious return to the fault principle, might remedy the situation. 
However it could also be argued that inadequate deterrence is the cause of this crisis. On this account the problem would lie not with the tort system’s continued unwarranted expansion, but in the fact that defendants really have been too careless. If prospective injurers were appropriately deterred from engaging in unnecessarily risky activities, then fewer accidents would ever occur in the first place, and this would reduce the need for litigation at its very source. If we take this to be the cause of tort law’s failure then our solution should aim to improve deterrence. Glen Robinson has argued that improved deterrence could be achieved if plaintiffs were allowed to sue defendants for wrongful exposure to ongoing risks of future harm, even in the absence of currently materialized losses. He argues that at least in toxic injury type cases the tortious creation of risk [should be seen as] an appropriate basis of liability, with damages being assessed according to the value of the risk, as an alternative to forcing risk victims to abide the outcome of the event and seek damages only if and when harm materializes. In a sense, Robinson wishes to treat newly-acquired wrongful risks as de facto wrongful losses, and these are what would be compensated in liability for risk creation (‘LFRC’) cases. Robinson argues that if the extent of damages were fixed to the extent of risk exposure, all detected unreasonable risk creators would be forced to bear the costs of their activities, rather than only those who could be found responsible for another’s injuries ‘on the balance of probabilities’. The incidence of accidents should decrease as a result of improved deterrence, reducing the ‘suing fest’ and so resolving the PL&I crisis. So whilst the first solution involves contracting the scope of tort liability, Robinson’s solution involves an expansion of its scope. 
However Robinson acknowledges that LFRC seems prima facie incompatible with current tort principles, which at the very least require the presence of plaintiff losses, defendant fault, and causation to be established before making defendants liable for plaintiffs’ compensation. Since losses would be absent in LFRC cases by definition, the first evidentiary requirement would always be frustrated, and in its absence proof of defendant fault and causation would also seem scant. If such an expansion of tort liability were not supported by current tort principles then it would be no better than proposals to switch accident law across to no-fault, since both solutions would require comprehensive legal reform. However Robinson argues that the above three evidentiary requirements could be met in LFRC cases to the same extent that they are met in other currently accepted cases, and hence that his solution would therefore be preferable to no-fault solutions as it would only require incremental but not comprehensive legal reform. Although I believe that actual losses should be present before allowing plaintiffs to seek compensation, I will not present a positive argument for this conclusion. My aim in this paper is not to debate the relative merits of Robinson’s solution as compared to no-fault solutions, nor to determine which account of the cause of the PL&I crisis is closer to the truth, but rather to find out whether Robinson’s solution would indeed require less radical legal reform than, for example, proposed no-fault solutions. I will argue that Robinson fails to show that current tort principles would support his proposed solution, and hence that his solution is at best on an even footing with no-fault solutions since both would require comprehensive legal reform.
Is nanotechnology-based human enhancement morally permissible? One reason to question such enhancement stems from a concern for preserving our species. It is harder than one might think, however, to explain what could be wrong with altering our own species. One possibility is to turn to the environmental ethics literature. Perhaps some of the arguments for preserving other species can be applied against nanotechnology-based human enhancements that alter human nature. This paper critically examines the case for using two of the strongest arguments in the environmental ethics literature to show that nanotechnology-based human enhancements are impermissible: 1) Our species, like many other naturally occurring species, has aesthetic value. So, nanotechnology-based human enhancements that alter our species should be prohibited. 2) Our species plays valuable ecological roles. Nanotechnology-based human enhancements that alter our species are likely to interfere with our species playing our ecologically valuable roles. So, such enhancements should be prohibited. Neither argument, ultimately, proves conclusive. The paper concludes, however, that considerations underlying both arguments may show us that some nanotechnology-based human enhancements are impermissible.
In The Morality of Freedom, Joseph Raz argues against a right to autonomy. This argument helps to distinguish his theory from his competitors'. For many liberal theories ground such a right. Some even defend entirely autonomy-based accounts of rights. This paper suggests that Raz's argument against a right to autonomy raises an important dilemma for his larger theory. Unless his account of rights is limited in some way, Raz's argument applies against almost all (purported) rights, not just a right to autonomy. But, on the traditional way of limiting accounts like his, Raz's account actually supports the conclusion that people have a right to autonomy. So, unless there is another way of limiting his account that does not have this consequence, Raz's argument against a right to autonomy does not go through.
Most men and nearly all women have non-defective colour vision, as measured by standard colour tests such as those of Ishihara and Farnsworth. But people vary, according to gender, race and age, in their performance in matching experiments. For example, when subjects are shown a screen, one half of which is lit by a mixture of red and green lights and the other by yellow or orange light, and they are asked to adjust the mixture of lights so as to make the two halves of the screen match in colour, they disagree about the location of the match. Where one male subject sees the two sides of the screen as being the same in colour, another female subject may see one side as a little redder or greener. And there are corresponding differences with age and race.
Anyone familiar with The Economist knows the mantra: Free trade will ameliorate poverty by increasing growth and reducing inequality. This paper suggests that problems underlying measurement of poverty, inequality, and free trade provide reason to worry about this argument. Furthermore, the paper suggests that better evidence is necessary to establish that free trade is causing inequality and poverty to fall. Experimental studies usually provide the best evidence of causation. So, the paper concludes with a call for further research into the prospects for ethically acceptable experimental testing of free trade's impact on poverty and inequality. Although the paper is unabashedly methodological, its conclusions bear on many ethical debates. Ethicists sometimes argue, for instance, that there is reason to encourage free trade because they believe free trade is decreasing poverty and inequality. Clarifying the empirical facts may not settle ethical debates, but it may inform them.
A heterogeneous survey sample of for-profit, non-profit and government employees revealed that organizational factors but not personal characteristics were significant antecedents of misconduct and job satisfaction. Formal organizational compliance practices and ethical climate were independent predictors of misconduct, and compliance practices also moderated the relationship between ethical climate and misconduct, as well as between pressure to compromise ethical standards and misconduct. Misconduct was not predicted by level of moral reasoning, age, sex, ethnicity, job status, or size and type of organization. Demographic variables predicted job satisfaction, and organizational variables added significant incremental variance. Results suggest the importance of promoting a moral organization through the words and actions of senior managers and supervisors, independent of formal mechanisms such as codes of conduct.
Could neuroimaging evidence help us to assess the degree of a person’s responsibility for a crime which we know that they committed? This essay defends an affirmative answer to this question. A range of standard objections to this high-tech approach to assessing people’s responsibility is considered and then set aside, but I also bring to light and then reject a novel objection—an objection which is only encountered when functional (rather than structural) neuroimaging is used to assess people’s responsibility.
In a powerful and original contribution to the history of ideas, Hannah Dawson explores the intense preoccupation with language in early-modern philosophy, and presents a groundbreaking analysis of John Locke's critique of words. By examining a broad sweep of pedagogical and philosophical material from antiquity to the late seventeenth century, Dr Dawson explains why language caused anxiety in writers such as Montaigne, Bacon, Descartes, Hobbes, Gassendi, Nicole, Pufendorf, Boyle, Malebranche and Locke. Locke, Language and Early-Modern Philosophy demonstrates that new developments in philosophy, in conjunction with weaknesses in linguistic theory, resulted in serious concerns about the capacity of words to refer to the world, the stability of meaning, and the duplicitous power of words themselves. Dr Dawson shows that language so fixated all manner of early-modern authors because it was seen as an obstacle to both knowledge and society. She thereby uncovers a novel story about the problem of language in philosophy, and in the process reshapes our understanding of early-modern epistemology, morality and politics.
In "Torts, Egalitarianism and Distributive Justice" (Ashgate, 2007), Tsachi Keren-Paz presents an impressively detailed analysis that bolsters the case for incremental tort law reform. However, although the book's greatest strength is the depth of analysis it offers, supporters of radical law reform proposals may interpret the complexity of the solution it offers (and its respective cost) as conclusive proof that tort law can take adequate account of egalitarian aims only at an unacceptably high cost.
This is a report on the 3-day workshop The Neuroscience of Responsibility that was held in the Philosophy Department at Delft University of Technology in The Netherlands during February 11th–13th, 2010. The workshop had 25 participants from The Netherlands, Germany, Italy, UK, USA, Canada and Australia, with expertise in philosophy, neuroscience, psychology, psychiatry and law. Its aim was to identify current trends in neurolaw research related specifically to the topic of responsibility, and to foster international collaborative research on this topic. The workshop agenda was constructed by the participants at the start of each day by surveying the topics of greatest interest and relevance to participants. In what follows, we summarize (1) the questions which participants identified as most important for future research in this field, (2) the most prominent themes that emerged from the discussions, and (3) the two main international collaborative research project plans that came out of this meeting.
Derrida's Specters of Marx asks whether and how we could inherit Marx today: whether we might find, in a certain spirit of Marx, the critical resources to challenge resurgent liberal ideals, without this challenge assuming a dogmatic or totalitarian form. Derrida's own response to this question involves a curious move: a material transformation of Marx's text, in which Derrida first foreshadows, and then carries out, the excision of a single sentence from the pivotal passage in which Marx christens the commodity fetish. The excision subtly transforms the meaning of Marx's text and, in the process, acts out a vision of inheritance as an active, transformative performance, rather than as a passive transmission of inherited content to its heirs. In this paper, I explore the way in which Derrida foreshadows and then effects this curious elision. I highlight the distinctive understanding of transformative inheritance at the heart of Derrida's text, and also pose the question of why Derrida should effect this particular transformation in the search for a certain deconstructive spirit in Marx's work.
This anthology contains excerpts from some thirty-two important seventeenth- and eighteenth-century moral philosophers. Including a substantial introduction and extensive bibliographies, the anthology facilitates the study and teaching of early modern moral philosophy in its crucial formative period. Alongside well-known thinkers such as Hobbes, Hume, and Kant, there are excerpts from a wide range of philosophers never previously assembled in one text, such as Grotius, Pufendorf, Nicole, Clarke, Leibniz, Malebranche, Holbach and Paley. Originally issued as a two-volume edition in 1990, the anthology is now re-issued with a new foreword by Professor Schneewind, as a one-volume anthology to serve as a companion to his highly successful history of modern ethics, The Invention of Autonomy. The anthology provides many of the sources discussed in The Invention of Autonomy, and taken together the two volumes will be an invaluable resource for the teaching of the history of modern moral philosophy.
Garrath Williams claims that truly responsible people must possess a “capacity … to respond [appropriately] to normative demands” (2008:462). However, there are people whom we would normally praise for their responsibility despite the fact that they do not yet possess such a capacity (e.g. consistently well-behaved young children), and others who have such a capacity but who are still patently irresponsible (e.g. some badly-behaved adults). Thus, I argue that to qualify for the accolade “a responsible person” one need not possess such a capacity, but need only be earnestly willing to do the right thing and have a history that testifies to this willingness. Although we may have good reasons to prefer to have such a capacity ourselves, and to associate ourselves with others who have it, at a conceptual level I do not think that such considerations support the claim that having this capacity is a necessary condition of being a responsible person in the virtue sense.
According to a consensus of psycho-physiological and philosophical theories, color sensations (or qualia) are generated in a cerebral "space" fed from photon-photoreceptor interaction (producing "metamers") in the retina of the eye. The resulting "space" has three dimensions: hue (or chroma), saturation (or "purity"), and brightness (lightness, value or intensity) and (in some versions) is further structured by primitive or landmark "colors"—usually four, or six (when white and black are added to red, yellow, green and blue). It has also been proposed that there are eleven semantic universals—labeling the previous six plus the "intermediaries" of orange, pink, brown, purple, and gray. There are many versions of this consensus, but they all aim to provide ontological, epistemological and semantic blueprints for the brute fact of the reality of color ordained by Nature (evolution). In contrast to this consensus, we have argued that "seeing color" is not a matter of light waves impacting on our eyes, producing sensations to be categorized and labeled in the "color space" in the brain. While electrochemical events may unproblematically be regarded as the causal precondition for seeing color, the reception of sensations in "the color space" as semantically labeled natural categories, kinds, or information, is a "just so" story: it is Wittgenstein's beetle in a box. In contrast we consider that the authority of this consensus might better be regarded not as the result of the truth-tracking of nature, but as the sociohistorical outcome of philosophical presuppositions, scientific theories, experimental practices, technological apparatus, and their feed forward into the lifeworld. The question we shall therefore explore is whether, or to what extent, we ourselves are changed, as the conditions of production of color science change.
Thus we are doing a kind of anthropology at two levels: of color science itself (and its effect on our own lifeworld), and of those studied by the "anthropology of color". As befits this stance we are agnostic about the theoretical entities of color science (cf. van Fraassen 2001), and within this new context, we propose to cross-cut object-and-subject, organism-and-environment (the bedrock of color science) in socio-historical ways. Our approach is in part inspired by, but not the same as, that of Gibson, in that we wish to pursue the notion of "social affordances" (Bermúdez 1995). We suggest that color has become a naturalization through science-based technologies, which, through praxes and materializations, have become the perceptual and cultural entities that structure experience and understanding in the lifeworld. It is this naturalization that we shall refer to and characterize as "the historically inflected exosomatic organ". Consequently we shall explore the historical ontology of "color" without assuming an underlying biological constant (Dupré 2001). In part 1 we show the flimsiness of the evidence for the three dimensions of color, borrowed from physics, and fine-tuned to a "standard observer" (a "spectral creature" with a phenomenal "color space"). In part 2 we address the structuring of hue through the development of color circles and color spaces. This is followed by a review of the evidence for unique hues. Again the evidence is shown to be flimsy. We then show that an isolated domain of color is a particular kind of model, not a "natural given". In part 3, after reviewing what is referred to as "the isomorphy thesis," we discuss the exemplary case study of Berlin and Kay (1969). This illustrates the pull of stadial models presupposed by their evolutionary theory of color language. The Berlin and Kay paradigm proposes that American English color terms are incorrigible and can provide the universal metalanguage.
We conclude by presenting an alternative account, namely that we ourselves are changed as the conditions of production of color science change. We argue that it is better to regard "seeing-color" as a historically inflected exosomatic organ that provides social affordances for those trained to grasp them.
The way in which we characterize the structural and functional differences between psychopath and normal brains – either as biological disorders or as mere biological differences – can influence our judgments about psychopaths’ responsibility for criminal misconduct. However, Marga Reimer (Neuroethics 1(2):14, 2008) points out that whether our characterization of these differences should be allowed to affect our judgments in this manner “is a difficult and important question that really needs to be addressed before policies regarding responsibility... can be implemented with any confidence”. This paper is an attempt to address Reimer’s difficult and important question; I argue that irrespective of which of these two characterizations is chosen, our judgments about psychopaths’ responsibility should not be affected, because responsibility hinges not on whether a particular difference is (referred to as) a disorder or not, but on how that difference affects the mental capacities required for moral agency.