In view of rapid and dramatic technological change, it is important to take the special requirements of privacy protection into account early on, because new technological systems often contain hidden dangers which are very difficult to overcome after the basic design has been worked out. So it makes all the more sense to identify and examine possible data protection problems when designing new technology and to incorporate privacy protection into the overall design, instead of having to come up with laborious and time-consuming “patches” later on. This approach is known as “Privacy by Design” (PbD).
An accountability-based privacy governance model is one where organizations are charged with societal objectives, such as using personal information in a manner that maintains individual autonomy and which protects individuals from social, financial and physical harms, while leaving the actual mechanisms for achieving those objectives to the organization. This paper discusses the essential elements of accountability identified by the Galway Accountability Project, with scholarship from the Centre for Information Policy Leadership at Hunton & Williams LLP. Conceptual Privacy by Design principles are offered as criteria for building privacy and accountability into organizational information management practices. The authors then provide an example of an organizational control process that uses the principles to implement the essential elements. Privacy by Design principles, initially developed in the ’90s by Dr. Ann Cavoukian to advance privacy-enhancing information and communication technologies, have since been expanded to apply to business processes as well.
An introductory message from Peter Hustinx, European Data Protection Supervisor, delivered at Privacy by Design: The Definitive Workshop. This presentation looks back at the origins of Privacy by Design, notably the publication of the first report on “Privacy Enhancing Technologies” by a joint team of the Information and Privacy Commissioner of Ontario, Canada and the Dutch Data Protection Authority in 1995. It looks ahead and addresses the question of how the promises of these concepts could be delivered in practice.
Recently, an associative learning account of cognitive control has been suggested (Verguts & Notebaert, 2009). In this so-called adaptation by binding theory, Hebbian learning of stimulus–stimulus and stimulus–response associations is assumed to drive the adaptation of human behavior. In this study, we evaluated the validity of the adaptation-by-binding account for the case of implicit learning of regularities within a stimulus set (i.e., the frequency of specific unit digit combinations in a two-digit number magnitude comparison task) and their association with a particular response. Our data indicated that participants indeed learned these regularities and adapted their behavior accordingly. In particular, influences of cognitive control were even able to override the numerical distance effect—one of the most robust effects in numerical cognition research. Thus, the general cognitive processes involved in two-digit number magnitude comparison seem much more complex than previously assumed. Multi-digit number magnitude comparison may not be automatic and inflexible but influenced by processes of cognitive control being highly adaptive to stimulus set properties and task demands on multiple levels.
Issues in facing and solving the problem of sexual misconduct -- Cases of teachers who become involved in consensual relationships -- Cases of coaches who become involved in sexual misconduct -- Cases of predator teachers -- Training teachers, coaches, and students to avoid sexual misconduct.
Whistleblowing by employees to regulatory agencies and other parties external to the organization can have serious consequences both for the whistleblower and the company involved. Research has largely focused on individual and group variables that affect individuals' decision to blow the whistle on perceived wrongdoing. This study examined the relationship between selected organizational characteristics and the perceived level of external whistleblowing by employees in 240 organizations. Data collected in a nationwide survey of human resource executives were analyzed using analysis of variance.
On the face of it, some of our knowledge is of moral facts (for example, that this promise should not be broken in these circumstances), and some of it is of non-moral facts (for example, that the kettle has just boiled). But, some argue, there is reason to believe that we do not, after all, know any moral facts. For example, according to J. L. Mackie, if we had moral knowledge (“if we were aware of [objective values]”), “it would have to be by some special faculty of moral perception or intuition, utterly different from our ordinary ways of knowing everything else” (1977, p. 38). But we have no such special faculty. So, we have no moral knowledge. Following Mackie, let us distinguish two questions: Q1: Assuming that we have moral knowledge, how do we have it? Q2: Do we in fact have any moral knowledge? In response to the first question, I argue that if we have moral knowledge, we have some of it in the same way we have knowledge of our immediate environment: by perception. Many people think that this answer leads to moral skepticism, because they think that we obviously cannot have moral knowledge by perception. But I will argue that this is incorrect. The plan for the paper is as follows. In Sections 2–4, I work up to my answer to Q1 by considering rivals. In Section 5, I explain what marks my answer to Q1 as a distinctive view, and defend it. In Section 6, I briefly discuss how this answer to Q1 affects what we say in response to Q2.
The best grounds for accepting contextualism concerning knowledge attributions are to be found in how knowledge-attributing (and knowledge-denying) sentences are used in ordinary, nonphilosophical talk: What ordinary speakers will count as “knowledge” in some non-philosophical contexts they will deny is such in others. Contextualists typically appeal to pairs of cases that forcefully display the variability in the epistemic standards that govern ordinary usage: A “low standards” case (henceforth, “LOW”) in which a speaker seems quite appropriately and truthfully to ascribe knowledge to a subject will be paired with a “high standards” case (“HIGH”) in which another speaker in a quite different and more demanding context seems with equal propriety and truth to say that the same subject (or a similarly positioned subject) does not know. The contextualist argument based on such cases is driven by the premises that the positive attribution of knowledge in LOW is true, and that the denial of knowledge in HIGH is true. And where the contextualist has constructed HIGH and LOW wisely, those premises are in turn powerfully supported by the two mutually reinforcing strands of evidence that both of the claims intuitively seem true, and that both claims are perfectly appropriate. The resulting argument for contextualism is very powerful indeed, but I am on the offensive making that case in another paper: “The Ordinary Language Basis for Contextualism and the New Invariantism.”
In 1997, a Scottish surgeon by the name of Robert Smith was approached by a man with an unusual request: he wanted his apparently healthy lower left leg amputated. Although details about the case are sketchy, the would-be amputee appears to have desired the amputation on the grounds that his left foot wasn’t part of him – it felt alien. After consultation with psychiatrists, Smith performed the amputation. Two and a half years later, the patient reported that his life had been transformed for the better by the operation. A second patient was also reported as having been satisfied with his amputation.
[p. 45] I wish to represent a certain subclass of nonconventional implicatures, which I shall call CONVERSATIONAL implicatures, as being essentially connected with certain general features of discourse; so my next step is to try to say what these features are. The following may provide a first approximation to a general principle. Our talk exchanges do not normally consist of a succession of disconnected remarks, and would not be rational if they did. They are characteristically, to some degree at least, cooperative efforts; and each participant recognizes in them, to some extent, a common purpose or set of purposes, or at least a mutually accepted direction. This purpose or direction may be fixed from the start (e.g., by an initial proposal of a question for discussion), or it may evolve during the exchange; it may be fairly definite, or it may be so indefinite as to leave very considerable latitude to the participants (as in a casual conversation). But at each stage, SOME possible conversational moves would be excluded as conversationally unsuitable. We might then formulate a rough general principle which participants will be expected (ceteris paribus) to observe, namely: Make your conversational contribution such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged. One might label this the COOPERATIVE PRINCIPLE. On the assumption that some such general principle as this is acceptable, one may perhaps distinguish four categories under one or another of which will fall certain more specific maxims and submaxims, the following of which will, in general, yield results in accordance with the Cooperative Principle. Echoing Kant, I call these categories Quantity, Quality, Relation, and Manner. The category of QUANTITY relates to the quantity of information to be provided, and under it fall the following maxims.
[Jennifer Hornsby] The central claim is that the semantic knowledge exercised by people when they speak is practical knowledge. The relevant idea of practical knowledge is explicated, applied to the case of speaking, and connected with an idea of agents' knowledge. Some defence of the claim is provided. /// [Jason Stanley] The central claim is that Hornsby's argument that semantic knowledge is practical knowledge is based upon a false premise. I argue, contra Hornsby, that speakers do not voice their thoughts directly. Rather, our actions of voicing our thoughts are justified by decisions we make (albeit rapidly) about what words to use. Along the way, I raise doubts about other aspects of the thesis that semantic knowledge is practical knowledge.
This paper comments on Gallagher’s recently published direct perception proposal about social cognition [Gallagher, S. (2008a). Direct perception in the intersubjective context. Consciousness and Cognition, 17(2), 535–543]. I show that direct perception is in danger of being appropriated by the very cognitivist accounts criticised by Gallagher (theory theory and simulation theory). Then I argue that the experiential directness of perception in social situations can be understood only in the context of the role of the interaction process in social cognition. I elaborate on the role of social interaction with a discussion of participatory sense-making to show that direct perception, rather than being a perception enriched by mainly individual capacities, can be best understood as an interactional phenomenon.
Champions of virtue ethics frequently appeal to moral perception: the notion that virtuous people can “see” what to do. According to a traditional account of virtue, the cultivation of proper feeling through imitation and habituation issues in a sensitivity to reasons to act. Thus, we learn to see what to do by coming to feel the demands of courage, kindness, and the like. But virtue ethics also claims superiority over other theories that adopt a perceptual moral epistemology, such as intuitionism – which John McDowell criticizes for illicitly “borrow[ing] the epistemological credentials” of perception. In this paper, I suggest that the most promising way for virtue ethics to use perceptual metaphors innocuously is by adopting a skill model of virtue, on which the virtues are modeled on forms of practical know-how. Yet I contend that this model is double-edged for virtue ethics. The skill model belies some central ambitions and dogmas of the traditional view, especially its most idealized claims about virtue and the virtuous. While this may be a cost that its champions are unprepared to pay, I suggest that virtue ethics would do well to embrace a more realistic moral psychology and a correspondingly less sublime conception of virtue.
The relationship between Employer and Employees is a central one in the world of business. While an important relationship, it is one that is often a source of tension for the workplace. Employers are seemingly in constant mistrust of workers, while workers often look upon their bosses as "less than competent". In the American world of business today, should this "adversarial" relationship continue, or should the Employer–Employee Relationship be governed by different rules? Immanuel Kant's Categorical Imperative offers some insights into the way this relationship should be viewed. Also, the philosopher Alfred North Whitehead has some important points to add to the discussion of this crucial business relationship. A look at the case involving Malden Mills Textile Plant and its CEO Aaron Feuerstein will be used to launch this discussion.
At the beginning of Die Grundlagen der Arithmetik (§2), Frege observes that “it is in the nature of mathematics to prefer proof, where proof is possible”. This, of course, is true, but thinkers differ on why it is that mathematicians prefer proof. And what of propositions for which no proof is possible? What of axioms? This talk explores various notions of self-evidence, and the role they play in various foundational systems, notably those of Frege and Zermelo. I argue that both programs are undermined at a crucial point, namely when self-evidence is supported by holistic and even pragmatic considerations.
Some omissions seem to be causes. For example, suppose Barry promises to water Alice’s plant, doesn’t water it, and that the plant then dries up and dies. Barry’s not watering the plant – his omitting to water the plant – caused its death. But there is reason to believe that if omissions are ever causes, then there is far more causation by omission than we ordinarily think. In other words, there is reason to think the following thesis true.
Is morality rational? In this book Gauthier argues that moral principles are principles of rational choice. He proposes a principle whereby choice is made on an agreed basis of cooperation, rather than according to what would give an individual the greatest expectation of value. He shows that such a principle not only ensures mutual benefit and fairness, thus satisfying the standards of morality, but also that each person may actually expect greater utility by adhering to morality, even though the choice did not have that end primarily in view. In resolving what may appear to be a paradox, the author establishes morals on the firm foundation of reason. Gauthier's argument includes an account of value, linking it to preference and utility; a discussion of the circumstances in which morality is unnecessary; and an application of morals by agreement to relations between peoples at different levels of development and different generations. Finally, he reflects on the assumptions about individuality and community made by his account of rationality and morality.
It seems beyond doubt that a thinker can come to know a conclusion by deducing it from premisses that he knows already, but philosophers have found it puzzling how a thinker could acquire knowledge in this way. Assuming a broadly externalist conception of knowledge, I explain why judgements competently deduced from known premisses are themselves knowledgeable. Assuming an exclusionary conception of judgeable content, I further explain how such judgements can be informative. (According to the exclusionary conception, which I develop from some remarks in Ramsey, a judgement's content is given by the hitherto live possibilities that it excludes or rules out.) I propose that the value of logic lies in its allowing us to combine different sources of knowledge, so that we can learn things that we could not learn from those sources individually. I conclude by arguing that while single-conclusion logics possess that value, multiple-conclusion logics do not.
Experimental philosophy is a new and somewhat controversial method of philosophical inquiry in which philosophers conduct experiments in order to shed light on issues of philosophical interest. This typically involves surveying ordinary people to find out their "intuitions" (roughly, pre-theoretical judgments) about hypothetical cases important to philosophical theorizing. The controversy surrounding this methodology arises largely because it departs from more traditional ways of doing philosophy. Moreover, some of its practitioners have used it to argue that the more traditional methods are flawed. In Experimental Philosophy, Joshua Knobe and Shaun Nichols take on the task of introducing readers to this burgeoning field by putting together a collection of some of its most important articles. Given how controversial it has become, this is a heavy burden. I'm happy to say that they have put together a valuable collection that serves as a diplomatic introduction to this exciting new style of research.
As I use the term, ‘entitlement’ is any warrant one has by default—i.e. without acquiring it. Some philosophers not only affirm the existence of entitlement, but also give it a crucial role in the justification of our perceptual beliefs. These philosophers affirm the Entitlement Thesis: An essential part of what makes our perceptual beliefs justified is our entitlement to the proposition that one is not a brain-in-a-vat. Crispin Wright, Stewart Cohen, and Roger White are among those who endorse this controversial claim. In this paper, I argue that the Entitlement Thesis is false.
This book, In Defense of Animals, provides a platform for the new animal liberation movement. A diverse group of people share this platform: university philosophers, a zoologist, a lawyer, militant activists who are ready to break the law to further their cause, and respected political lobbyists who are entirely at home in parliamentary offices. Their common ground is that they are all, in their very different ways, taking part in the struggle for animal liberation. This struggle is a new phenomenon. It marks an expansion of our moral horizons beyond our own species and is thus a significant stage in the development of human ethics. The aim of this introduction is to show why the movement is so significant, first by contrasting it with earlier movements against cruelty to animals, and then by setting out the distinctive ethical stance which lies behind the new movement.
Advocates of the "strong programme" in the sociology of knowledge have argued that, because scientific theories are "underdetermined" by data, sociological factors must be invoked to explain why scientists believe the theories they do. I examine this argument, and the responses to it by J.R. Brown (1989) and L. Laudan (1996). I distinguish between a number of different versions of the underdetermination thesis, some trivial, some substantive. I show that Brown's and Laudan's attempts to refute the sociologists' argument fail. Nonetheless, the sociologists' argument falls to a different criticism, for the version of the underdetermination thesis that the argument requires has not been shown to be true.
If the import of a book can be assessed by the problem it takes on, how that problem unfolds, and the extent of the problem’s fruitfulness for further exploration and experimentation, then Duffy has produced a text worthy of much close attention. Duffy constructs an encounter between Deleuze’s creation of a concept of difference in Difference and Repetition (DR) and Deleuze’s reading of Spinoza in Expressionism in Philosophy: Spinoza (EP). It is surprising that such an encounter has not already been explored, at least not to this extent and in this much detail. Since the two works were written simultaneously, as Deleuze’s primary and secondary dissertations, it is to be expected that there is much to learn from their interaction. Duffy proceeds by explicating, in terms of the differential calculus, a logic of what Deleuze in DR calls different/ciation, and then maps this onto Deleuze’s account of modal expression in EP.
In Part III of his Remarks on the Foundations of Mathematics Wittgenstein deals with what he calls the surveyability of proofs. By this he means that mathematical proofs can be reproduced with certainty and in the manner in which we reproduce pictures. There are remarkable similarities between Wittgenstein's view of proofs and Hilbert's, but Wittgenstein, unlike Hilbert, uses his view mainly with critical intent. He tries to undermine foundational systems in mathematics, like logicist or set theoretic ones, by stressing the unsurveyability of the proof-patterns occurring in them. Wittgenstein presents two main arguments against foundational endeavours of this sort. First, he shows that there are problems with the criteria of identity for the unsurveyable proof-patterns, and second, he points out that by making these patterns surveyable, we rely on concepts and procedures which go beyond the foundational frameworks. When we take these concepts and procedures seriously, mathematics does not appear as a uniform system, but as a mixture of different techniques.
Kuhn made two attempts at providing an evolutionary analogy for scientific change. The first attempt, in The Structure of Scientific Revolutions, is very brief and unstructured; in this article I discuss some of its weaknesses. Alexander Bird takes this attempt more seriously and provides a criticism based on oversimplified evolutionary assumptions. These assumptions prove to be inadequate for the second, more articulate, evolutionary analogy suggested by Kuhn in “The Road since Structure.” I argue, however, that this second Kuhnian attempt is undermined by his inadequate view of biological progress and by his misunderstanding of the concept of ecological niche. *Received April 2008. †To contact the author, please write to: School of Politics, International Studies, and Philosophy, Queen’s University Belfast, 21 University Square, Belfast, BT7 1PA Northern Ireland; e-mail: firstname.lastname@example.org.
At the philosophical foundations of our best and deepest theory of the structure of reality, namely quantum mechanics, there is an intellectual scandal that reflects badly on most of this century’s leading physicists and philosophers of physics. One way of making the nature of the scandal plain is simply to observe that this paper by Lockwood is untainted by it. Lockwood gives us an up to date investigation of metaphysics, and discusses the implications of quantum theory for some of the bread and butter concepts of philosophy, such as reality, the self and causality. The scandal is that there is very little other work of that description in the literature, and what little there is, is systematically disregarded by mainstream thinking in both philosophy and physics. Despite the unrivalled empirical success of quantum theory, the very suggestion that it may be literally true as a description of nature is still greeted with cynicism, incomprehension and even anger.
I discuss two ways in which emotions explain actions: in the first, the explanation is expressive; in the second, the action is not only explained but also rationalized by the emotion's intentional content. The belief-desire model cannot satisfactorily account for either of these cases. My main purpose is to show that the emotions constitute an irreducible category in the explanation of action, to be understood by analogy with perception. Emotions are affective perceptions. Their affect gives them motivational force, and they can rationalize actions because, like perception, they have a representational intentional content. Because of this, an emotion can non-inferentially justify a belief which in its turn justifies or rationalizes an action; so emotions may constitute a source of moral knowledge.
Multiculturalism requires sustained and serious philosophical reflection, which in turn requires public outreach and communication. This piece briefly outlines concerns raised by the philosophy of multiculturalism and, conversely, multiculturalism in philosophy, which ultimately force us to reconsider the philosopher’s own role and responsibility. I conclude with a provocative suggestion of philosophy as public diplomacy. (As this is intended to be a piece for a general audience, secondary literature is only referred to in the conclusion. References gladly provided upon request.)
According to the standard story (a) W. V. Quine’s criticisms of the idea that logic is true by convention are directed against, and completely undermine, Rudolf Carnap’s idea that the logical truths of a language L are the sentences of L that are true-in-L solely in virtue of the linguistic conventions for L, and (b) Quine himself had no interest in or use for any notion of truth by convention. This paper argues that (a) and (b) are both false. Carnap did not endorse any truth-by-convention theses that are undermined by Quine’s technical observations. Quine knew this. Quine’s criticisms of the thesis that logic is true by convention are not directed against a truth-by-convention thesis that Carnap actually held, but are part of Quine’s own project of articulating the consequences of his scientific naturalism. Quine found that logic is not true by convention in any naturalistically acceptable sense. But he also observed that in set theory and other highly abstract parts of science we sometimes deliberately adopt postulates with no justification other than that they are elegant and convenient. For Quine such postulations constitute a naturalistically acceptable and fallible sort of truth by convention. It is only when an act of adopting a postulate is not indispensable to natural science that Quine sees it as affording truth by convention ‘unalloyed’. A naturalist who accepts Quine’s notion of truth by convention is therefore not limited (as naturalists are often thought to be) to accepting only those postulates that she regards as indispensable to natural science.
Two concepts of utmost importance for the analytic philosophy of the twentieth century, “sense-data” and “knowledge by acquaintance”, were introduced by Bertrand Russell under the influence of two idealist philosophers: F. H. Bradley and Alexius Meinong. This paper traces the exact history of their introduction. We shall see that between 1896 and 1898, Russell had a fully-elaborated theory of “sense-data”, which he abandoned after his analytic turn of the summer of 1898. Furthermore, following a subsequent turn of August 1900, after he became acquainted with the works of Peano and later of Frege, Russell gradually developed another theory of sense-data. With the collaboration of G. E. Moore, Russell reintroduced the term “sense-data” in 1911. Concomitantly with this move, Russell introduced the epistemological term “knowledge by acquaintance”, which came to designate the grasping of sense-data and universals.
The paradigmatic assumption that REM sleep is the physiological equivalent of dreaming is in need of fundamental revision. A mounting body of evidence suggests that dreaming and REM sleep are dissociable states, and that dreaming is controlled by forebrain mechanisms. Recent neuropsychological, radiological, and pharmacological findings suggest that the cholinergic brain stem mechanisms that control the REM state can only generate the psychological phenomena of dreaming through the mediation of a second, probably dopaminergic, forebrain mechanism. The latter mechanism (and thus dreaming itself) can also be activated by a variety of nonREM triggers. Dreaming can be manipulated by dopamine agonists and antagonists with no concomitant change in REM frequency, duration, and density. Dreaming can also be induced by focal forebrain stimulation and by complex partial (forebrain) seizures during nonREM sleep, when the involvement of brainstem REM mechanisms is precluded. Likewise, dreaming is obliterated by focal lesions along a specific (probably dopaminergic) forebrain pathway, and these lesions do not have any appreciable effects on REM frequency, duration, and density. These findings suggest that the forebrain mechanism in question is the final common path to dreaming and that the brainstem oscillator that controls the REM state is just one of the many arousal triggers that can activate this forebrain mechanism. The “REM-on” mechanism (like its various NREM equivalents) therefore stands outside the dream process itself, which is mediated by an independent, forebrain “dream-on” mechanism. Key Words: acetylcholine; brainstem; dopamine; dreaming; forebrain; NREM; REM; sleep.
The SIMS model claims that it is by means of an embodied simulation that we determine the meaning of an observed smile. This suggests that crucial interpretative work is done in the mapping that takes us from a perceived smile to the activation of one's own facial musculature. How is this mapping achieved? Might it depend upon a prior interpretation arrived at on the basis of perceptual and contextual information?
On the 27th of October, 1949, the Department of Philosophy at the University of Manchester organized a symposium "Mind and Machine", as Michael Polanyi noted in his Personal Knowledge (1974, p. 261). This event is known, especially among scholars of Alan Turing, but it is scarcely documented. Wolfe Mays (2000) reported about the debate, which he personally had attended, and paraphrased a mimeographed document that is preserved at the Manchester University archive. He forwarded a copy to Andrew Hodges and B. Jack Copeland, who then published it on their respective websites. The basis of the interpretation here is the copy preserved in the Regenstein Library of the University of Chicago, Special Collections, Polanyi Collection (abbreviated RPC, box 22, folder 19). The same collection holds the mimeographed statement that Polanyi prepared for this symposium: "Can the mind be represented by a machine?" This text has not been studied by Polanyi scholars.
The central claim is that the semantic knowledge exercised by people when they speak is practical knowledge. The relevant idea of practical knowledge is explicated, applied to the case of speaking, and connected with an idea of agents’ knowledge. Some defence of the claim is provided.
My project in this paper is to extend the interventionist analysis of causation to give an account of causation in psychology. Many aspects of empirical investigation into psychological causation fit straightforwardly into the interventionist framework. I address three problems. First, the problem of explaining what it is for a causal relation to be properly psychological rather than merely biological. Second, the problem of rational causation: how it is that reasons can be causes. Finally, I look at the implications of an interventionist analysis for the idea that an inquiry into psychological causes must be an inquiry into causal mechanisms. I begin by setting out the main ideas of the interventionist approach.
Under free institutions the exercise of human reason leads to a plurality of reasonable, yet irreconcilable doctrines. Rawls's political liberalism is intended as a response to this fundamental feature of modern democratic life. Justifying coercive political power by appeal to any one (or sample) of these doctrines is, Rawls believes, oppressive and illiberal. If we are to achieve unity without oppression, he tells us, we must all affirm a public political conception that is supported by these diverse reasonable doctrines. The first part of this essay argues that the free use of human reason leads to reasonable pluralism over most of what we call the political. Rawls's notion of the political does not avoid the problem of state oppression under conditions of reasonable pluralism. The second part tries to show how justificatory liberalism provides (1) a conception of the political that takes seriously the fact that the free use of human reason leads us to sharply disagree in the domain of the political while (2) articulating a conception of the political according to which the coercive intervention of the state must be justified by public reasons.
The question before us is "Can there be an objective morality without God?" By the term "God" we shall mean the God in whom Christians believe, the God of the Bible, not some abstract Higher Power or New Age deity. Dr. Chamberlain believes that the biblical God exists, and that if he didn't exist, there could be no objective moral truths. For myself, I once believed in such a God, but no longer do. My non-belief, however, doesn't mean that I am a moral nihilist, denying that statements about right and wrong are ever objectively true. On the contrary I will argue that there can be objective ethics in the absence of any god whatever. And I'll argue, further, that the existence of objective moral truths actually requires the non-existence of such a God.
My paper examines a vital but neglected aspect of Frank Sibley's pioneering account of aesthetic concepts. This is the claim that many aesthetic qualities are such that they can be characterized adequately only by metaphors or ‘quasi-metaphors’. Although there is no indication that Sibley embraced it, I outline a radical, minimalist conception of the experience of perceiving an item as possessing an aesthetic quality, which, I believe, has wide application and which would secure Sibley's position for those aesthetic qualities that conform to it.
I am going to begin today by bringing together one of the themes of Carol Voeller’s remarks with one of the criticisms raised by Rachel Cohon, because I see them as related, and want to address them together. Voeller argues that the moral law is constitutive of our nature as rational agents. To put it in her own words, “to be the kind of object it is, is for a thing to be under, or constituted by, the laws which are its nature. For Kant, laws are constitutive principles … in something very close to an Aristotelian sense: for Kant, laws are proper to objects much as form is to object, for Aristotle.” Voeller believes that the moral law defines the kind of cause that we are, and we are under the moral law because we are that kind of cause. Since the defining quality of a rational agent is that a rational agent acts on its representation - I prefer to say conception - of a law, Voeller thinks the question for Kant is whether we can find a law which just is the law for causes that act on their representations of laws. As she puts it, “The problem, for Kant, is whether there is a law of a cause that acts on norms - on reflection, on its representation of a law. If there is, then the constitutive principle of that cause will be the law normative for it in reflection.” Now Voeller appears to think that I will disagree with this strategy for grounding the moral law, because she sees me as giving an anti-metaphysical or ametaphysical account of Kant’s ethics, in contrast to Kant’s own. But so far, I don’t.
The aim of this paper is to analyze whether a number of firm and industry characteristics, as well as media exposure, are potential determinants of corporate social responsibility (CSR) disclosure practices by Spanish listed firms. Empirical studies have shown that CSR disclosure activism varies across companies, industries, and time (Gray et al., Accounting, Auditing & Accountability Journal 8(2), 47–77, 1995; Journal of Business Finance & Accounting 28(3/4), 327–356, 2001; Hackston and Milne, Accounting, Auditing & Accountability Journal 9(1), 77–108, 1996; Cormier and Magnan, Journal of International Financial Management and Accounting 1(2), 171–195, 2003; Cormier et al., European Accounting Review 14(1), 3–39, 2005), which is usually justified by reference to several theoretical constructs, such as the legitimacy, stakeholder, and agency theories. Our findings evidence that firms with higher CSR ratings present a statistically significant larger size and a higher media exposure, and belong to more environmentally sensitive industries, as compared to firms with lower CSR ratings. However, neither profitability nor leverage seem to explain differences in CSR disclosure practices between Spanish listed firms. The most influential variable for explaining firms’ variation in CSR ratings is media exposure, followed by size and industry. Therefore, it seems that the legitimacy theory, as captured by those variables related to public or social visibility, is the most relevant theory for explaining CSR disclosure practices of Spanish listed firms.
This paper addresses the question whether introspection plus externalism about mental content warrant an a priori refutation of external-world skepticism and ontological solipsism. The suggestion is that if thought content is partly determined by affairs in the environment and if we can have non-empirical knowledge of our current thought contents, we can, just by reflection, know about the world around us – we can know that our environment is populated with content-determining entities. After examining this type of transcendental argument and discussing various objections found in the literature, I argue that the notion of privileged self-knowledge underlying this argument presupposes that we can learn, via introspection, that our so-called thoughts are propositional attitudes rather than contentless states. If, however, externalism is correct and thought content consists in the systematic dependency of internal states on relational properties, we cannot know non-empirically whether or not we have propositional attitudes. Self-knowledge (a propositional attitude) is consistent with us lacking the ability to rule out, via introspection, the possibility that we don't have any propositional attitudes. Self-knowledge provides us with knowledge of what is in our minds, but not that we have minds. Hence, the combination of externalism with the doctrine of privileged self-knowledge does not allow for an a priori refutation of skepticism and is therefore unproblematic.
Some medical services have long generated deep moral controversy within the medical profession as well as in broader society and have led to conscientious refusals by some physicians to provide those services to their patients. More recently, pharmacists in a number of states have refused on grounds of conscience to fill legal prescriptions for their customers. This paper assesses these controversies. First, I offer a brief account of the basis and limits of the claim to be free to act on one’s conscience. Second, I sketch an account of the basis of the medical and pharmacy professions’ responsibilities and the process by which they are specified and change over time. Third, I then set out and defend what I call the “conventional compromise” as a reasonable accommodation to conflicts between these professions’ responsibilities and the moral integrity of their individual members. Finally, I take up and reject the complicity objection to the conventional compromise. Put together, this provides my answer to the question posed in the title of my paper: “Conscientious refusal by physicians and pharmacists: who is obligated to do what, and why?”.
Consider the paradox of altruism: the existence of truly altruistic behaviors is difficult to reconcile with an evolutionary theory which holds that natural selection operates only on individuals, since in that case individuals should be unwilling to sacrifice their own fitness for the sake of others. Evolutionists have frequently turned to the hypothesis of group selection to explain the existence of altruism; but, even setting aside difficulties about understanding the relationship between altruistic behaviors and morality, group selection cannot explain the evolution of morality, since morality is a one-group phenomenon and group selection is a many-group phenomenon. After spelling out just what the problem is, this paper discusses several ways out and concludes by offering suggestions why one seems best.
This book covers a vast amount of material in the philosophy of mind, which makes it difficult to do justice to its tightly argued and nuanced details. It does, however, have two overarching goals that are visible, so to speak, from space. In the first half of the book Kirk aims to show that, contra his former self, philosophical zombies are not conceivable. By this he means that the zombie scenario as usually constructed contains an unnoticed contradiction, and explaining the contradiction reveals a radical misconception about the nature of phenomenal consciousness. His second aim is to construct a theory of perceptual-phenomenal consciousness that avoids this contradiction.
The debate between the reductive and the emergent materialist is still very much a live one. (Antony and Levine 1997; Auyang 2000; Bechtel and Richardson 1992; Block 1997; Boyd 1999; Crane 2001; David 1997; Fodor 1989; Fodor 1997; Kim 1993b; Kim 1994; Kim 1996; Kim 1999; Le Pore and Loewer 1987; Millikan 1999; Pereboom 2002; Rueger 2000; Van Gulick 2001; Yablo 1992). We argue that the best way to settle this debate is to take a step back and consider the metaphysics that is motivated by a careful consideration of some scientific examples. We argue that an account of emergence which grounds the emergence of a complex whole in the physical organisation of its parts can accommodate the emergent yet explicable novelty that can be found throughout science.
In this article, I argue that if one closely follows Hobbes' line of reasoning in Leviathan, in particular his distinction between the second and the third law of nature, and the logic of his contractarian theory, then Hobbes' state of nature is best translated into the language of game theory by an assurance game, and not by a one-shot or iterated prisoner's dilemma game, nor by an assurance dilemma game. Further, I support Hobbes' conclusion that the sovereign must always punish the Foole, and even exclude her from the cooperative framework or take her life, if she defects once society is established, which is best expressed in the language of game theory by a grim strategy. That is, compared to existing game-theoretic interpretations of Hobbes, I argue that the sovereign plays a grim strategy with the citizens once society is established, and not the individuals with one another in the state of nature.
This chapter defends the positive thesis which constitutes its title. It argues first, that the mind has been shaped by natural selection; and second, that the result of that shaping process is a modular mental architecture. The arguments presented are all broadly empirical in character, drawing on evidence provided by biologists, neuroscientists and psychologists (evolutionary, cognitive, and developmental), as well as by researchers in artificial intelligence. Yet the conclusion is at odds with the manifest image of ourselves provided both by introspection and by common-sense psychology. The chapter concludes by sketching how a modular architecture might be developed to account for the patently unconstrained character of human thought, which has served as an assumption in a number of recent philosophical attacks on mental modularity.
Kalam cosmological arguments have recently been the subject of criticisms, at least inter alia, by physicists---Paul Davies, Stephen Hawking---and philosophers of science---Adolf Grunbaum. In a series of recent articles, William Craig has attempted to show that these criticisms are “superficial, ill-conceived, and based on misunderstanding.” I argue that, while some of the discussion of Davies and Hawking is not philosophically sophisticated, the points raised by Davies, Hawking and Grunbaum do suffice to undermine the dialectical efficacy of kalam cosmological arguments.
In this paper, I challenge a widely presupposed principle in the epistemology of inference. The principle, (Validity Requirement), is this: S’s (purportedly deductive) reasoning, R, from warranted premise-beliefs provides (conditional) warrant for S’s belief in its conclusion only if R is valid. I argue against (Validity Requirement) from two prominent assumptions in the philosophy of mind: that the cognitive competencies that constitute reasoning are fallible, and that the attitudes operative in reasoning are anti-individualistically individuated. Indeed, my discussion will amount to a defence of anti-individualism against a novel ‘slow-switch’ argument against it. This argument contra anti-individualism has it that given anti-individualism and certain auxiliary assumptions, A, a switched reasoner may, in certain slow-switch circumstances, C, reason invalidly by equivocating concepts. More specifically: (Valid 0): Peter is in circumstances C, and auxiliary assumptions, A, hold. (Valid 1): If Peter is in circumstances C, and auxiliary assumptions A hold, then (if the attitudes operative in Peter’s reasoning R are anti-individualistically individuated, then R is not valid). (Valid 2): Peter’s reasoning, R, generates warrant for the conclusion-belief. (Valid 3): Peter’s reasoning, R, generates warrant for the conclusion-belief only if the reasoning, R, is valid. (Valid 4): So, the attitudes operative in Peter’s reasoning R are not anti-individualistically individuated. The argument involves weaker premises than those of familiar slow-switch arguments against anti-individualism. In particular, it requires only that the reasoning be de facto valid. This assumption is much weaker than the requirement that the validity of the reasoning be ‘transparent’ to the reasoner. Indeed, (Valid 3) is simply an instance of (Validity Requirement). However, I argue that anti-individualism and (Valid 0)–(Valid 2) should be upheld at the expense of (Valid 3).
In consequence, (Validity Requirement) stands in need of restriction. Thus, I argue for a surprising result in the epistemology of inference from widely accepted assumptions in the philosophy of mind.
An evolutionary point of view is proposed to make more appropriate distinctions between experience, awareness and consciousness. Experience can be defined as a characteristic linked closely to specific pattern matching, a characteristic already apparent at the molecular level at least. Awareness can be regarded as the special experience of one or more central, final modules in the animal neuronal brain. Awareness is what experience is to animals. Finally, consciousness could be defined as reflexive awareness. The ability for reflexive awareness is distinctly different from animal and human awareness and depends upon the availability of a separate frame of reference, as provided by symbolic language. As such, words have made reflexive awareness possible.
The physical and/or intrinsic connection approach to causation has become prominent in the recent literature, with Salmon, Dowe, Menzies, and Armstrong among its leading proponents. I show that there is a type of causation, causation by disconnection, with no physical or intrinsic connection between cause and effect. Only Hume-style conditions approaches and hybrid conditions-connections approaches appear to be able to handle causation by disconnection. Some Hume-style, extrinsic, absence-relating, necessary and/or sufficient condition component of the causal relation proves to be needed.
To subscribe to the embodied mind (or embodiment) framework is to reject the view that an individual’s mind is realized by her brain alone. As Clark (2008a) has argued, there are two ways to subscribe to embodiment: bodycentrism (BC) and the extended mind (EM) thesis. According to BC, an embodied mind is a two-place relation between an individual’s brain and her non-neural bodily anatomy. According to EM, an embodied mind is a three-place relation between an individual’s brain, her non-neural body and her non-bodily environment. I argue that BC can be given a weak and a strong interpretation, according to whether it accepts a functionalist account of the contribution of the non-neural body to higher cognitive functions and a computational account of the contents of concepts and the nature of conceptual processing. Thus, weak BC amounts to an incomplete version of EM. To accept a weak BC approach to concepts is to accept concept-empiricism. I raise four challenges for concept-empiricism and argue that what is widely taken as evidence for concept-empiricism from recent cognitive neuroscience could only vindicate weak BC if it could be shown that the non-neural body, far from being a tool at the service of the mind/brain, could be constitutive of the mind. If correct, EM would seem able to vindicate the claim that both bodily and non-bodily tools are constitutive of an individual’s mind. I scrutinize the basic arguments for EM and argue that they fail. This failure backfires on weak BC. One option left for advocates of BC is to endorse a strong, more controversial, BC approach to concepts.
When software is written and then utilized in complex computer systems, problems often occur. Sometimes these problems cause a system to malfunction, and in some instances such malfunctions cause harm. Should any of the persons involved in creating the software be blamed and punished when a computer system failure leads to persons being harmed? In order to decide whether such blame and punishment are appropriate, we need to first consider if the people are “morally responsible”. Should any of the people involved in creating the software be held morally responsible, as individuals, for the harm caused by a computer system failure? This article provides one view of moral responsibility and then discusses some barriers to holding people morally responsible. Next, it provides information about the Therac-25, a computer-controlled medical linear accelerator, and its computer systems failures that led to deaths and injuries. Finally it investigates whether two key people involved in the Therac-25 case could reasonably be considered to have some degree of moral responsibility for the deaths and injuries. The conclusions about whether or not these people were morally responsible necessarily rest upon a certain amount of speculation about what they knew and what they did. These limitations, however, should not cause us to conclude that discussions of moral responsibility are fruitless. In some cases, determinations of moral responsibility may be made and in others the investigation is still worthwhile, as the article demonstrates.
1. Naturalism Naturalism, it has been said, is the distinctive development in philosophy over the last thirty years. There has been a naturalistic turn away from the a priori methods of traditional philosophy to a conception of philosophy as continuous with natural science. The doctrine has been extensively discussed and has won a considerable following in the USA. This is, on the whole, not true of Britain and continental Europe, where the pragmatist tradition never took root, and the temptations of scientism in philosophy were less alluring. Contemporary American naturalism originates in the writings of Quine, the metaphysician of twentieth-century science. With extraordinary panache, he painted a large-scale picture of human nature, of language and of the web of belief. I believe that in almost every major respect, it is, like the picture painted by Descartes, the great metaphysician of seventeenth-century science, mistaken. But it evidently appeals to the spirit of the times. So it is worthy of critical examination and careful refutation. I shall argue that the naturalistic turn is a cul-de-sac – a turn that is to be passed by if we are to keep to the highroad of good sense. Naturalism, like so many of Quine’s doctrines, was propounded in response to Carnap. As Quine understood matters, Carnap had been persuaded by Russell’s Our Knowledge of the External World that it is the task of philosophy to demonstrate that our knowledge of the external world is a logical construction out of, and hence can be reduced to, elementary experiences. Quine rejected the reductionism of Carnap’s Logischer Aufbau, and found the idealist basis uncongenial to his own dogmatic realist behaviourism, inspired by Watson and later reinforced by Skinner. The rejection of reductionism and ‘unregenerate realism’, Quine averred, were the sources of his naturalism (FME 72). What exactly was this?
We can distinguish in Quine between three different but inter-related programmes for future philosophy: epistemological, ontological and philosophical naturalism. Naturalized epistemology is to displace traditional epistemology, transforming the investigation into ‘an enterprise within natural science’ (NNK 68) – a psychological enterprise of investigating how the ‘input’ of radiation, etc., impinging on the nerve endings of human beings can ‘ultimately’ result in an ‘output’ of our theoretical descriptions of the external world.
This is a review of the book ‘Memory Evolutive Systems: Hierarchy, Emergence, Cognition’, by A. Ehresmann and J.P. Vanbremeersch. I welcome the use of category theory and the notion of colimit as a way of describing how complex hierarchical systems can be organised, and the notion of categories varying with time to give a notion of an evolving system. In this review I also point out the relation of the notion of colimit to ideas of communication; the necessity of communications to be symbolic representations; and the use of an analogy with mathematics to spell out some of the necessities of such a mode of communication to be powerful, robust and efficient.
In this paper, I explore the notion of a “causal power”, particularly as it is relevant to a theory of properties whereby properties are individuated by the causal powers they bestow on the objects that instantiate them. I take as my target certain eliminativist positions that argue that certain kinds of properties (or relations) do not exist because they fail to bestow unique causal powers on objects. But the notion of a causal power is inextricably bound up with our notion of what an event is, and not only is there disagreement as to which theory of events is appropriate, but on the three prevailing theories, it can be shown that the eliminativists’ arguments do not follow.
This paper refutes two important and influential views in one fell swoop. The first is G.E. Moore’s view that assertions of the form ‘Q but I don’t believe that Q’ are inherently “absurd.” The second is Gareth Evans’s view that justification to assert Q entails justification to assert that you believe Q. Both views run aground on the possibility of being justified in accepting eliminativism about belief. A corollary is that a principle recently defended by John Williams is also false, namely, that justification to believe Q entails justification to believe that you believe Q.
The standard paradigm for mental causation is a person’s acting for a reason. Something happens - she intentionally φ’s - the occurrence of which we explain by citing a relevant belief or desire. In the present context, I simply take for granted the following two conditions on the appropriateness of this explanation. First, the agent φ’s _because_ she believes/desires what we say she does, where this is expressive of a _causal_ dependence. Second, her believing/desiring this gives her a _reason_ for φ-ing: recognizing that she has this belief/desire makes her φ-ing intelligible as rational in the light of her other attitudes and circumstances. A further condition must be met, though, if this is to be a genuine psychological explanation, a case of her acting _for_ the reason in question. Consider the following example of Davidson’s (1973, p. 79). An exhausted climber is desperate to rid herself of the weight and danger of holding her partner on a rope; and her sudden realization that simply letting go would achieve this so unnerves her that her grip loosens slightly and he falls. Her releasing him causally depends upon her having this belief and desire, which provide _a_ reason for doing what she does. But this is not _why_ she does it: it would be at best misleading to say that she dropped him, intentionally, because she was fed up with holding his weight, or because she thought that she might otherwise fall. Her letting go does not depend upon her having these reasons in the right way. The reason-giving relation is causally irrelevant. If we are to explain a person’s acting _for_ a reason, then her doing…
The paper has two main objectives: first, it presents a new argument against the so-called Anscombe Thesis (if χ φ-s by ψ-ing, then χ's φ-ing = χ's ψ-ing). Second, it develops a proposal about the syntax and semantics of the 'by'-locution.
A new book by Zenon Pylyshyn is always a cause for celebration among philosophers of psychology. While many hard-nosed experimental cognitive scientists are attentive to philosophers’ concerns, Pylyshyn stands alone in the extraordinary efforts he takes to understand, address, and struggle with the philosophical puzzles that the mind, and perception in particular, raises. Pylyshyn’s most recent work, Things and Places: How the Mind Connects with the World, does not disappoint. It is philosophically rich. Indeed, the approach to object perception that Pylyshyn develops in this book takes inspiration from Evans’s (1982) and Perry’s (1979) work on demonstratives and indexicals, draws on Dretskean (1981, 1986, 1988) ideas about representation, and tangles with Strawson (1959), Quine (1992), and Clark (2000, 2004) over how to understand the role of concepts in perception. In short, it is just the kind of book philosophers of psychology should lavishly slather with clotted cream and joyously devour at their next tea party. The main focus of this review will be Pylyshyn’s theory of FINSTs (an acronym for FINgers of INSTantiation, for reasons soon to be clarified). FINSTs are the primary subject of the first three chapters of Things and Places, after which they basically disappear for about eighty pages, to reappear in the final and lengthiest fifth chapter, where they are put to use in a speculative (and, to my mind, slightly incredible) explanation of data from mental imagery experiments. The fourth chapter is an engaging polemic against using subjective experience as a source of evidence about psychological processing and, in particular, the danger in assuming that because mental images appear to have spatial properties, they must be represented spatially. This chapter stands alone and would be of interest to followers of the imagery debate or, for that matter, to instructors looking for counter-examples when…
This study compares the Internet (corporate web pages) and annual reports as media of social responsibility disclosure (SRD) and analyses what influences disclosure. It examines SRD on the Internet by Portuguese listed companies in 2004 and compares the Internet and 2003 annual reports as disclosure media. The results are interpreted through the lens of a multi-theoretical framework. According to the framework adopted, companies disclose social responsibility information to present a socially responsible image so that they can legitimise their behaviours to their stakeholder groups and influence the external perception of reputation. Results suggest that a theoretical framework combining legitimacy theory and a resource-based perspective provides an explanatory basis for SRD by Portuguese listed companies.
There is currently a significant amount of interest in understanding and developing theories of realization. Naturally, arguments have arisen about the adequacy of some theories over others. Many of these arguments have a point. But some can be resolved by seeing that the theories of realization in question are not genuine competitors because they fall under different conceptual traditions with different but compatible goals. I will first describe three different conceptual traditions of realization that are implicated by the arguments under discussion. I will then examine the arguments, from an older complaint by Norman Malcolm against a familiar functional theory to a recent argument by Thomas Polger against an assortment of theories that traffic in inherited causal powers, showing how they can be resolved by situating the theories under their respective conceptual traditions.
By what types of properties do we specify twinges, toothaches, and other kinds of mental states? Wittgenstein considers two methods. Procedure one, direct, private acquaintance: A person connects a word to the sensation it specifies through noticing what that sensation is like in his own experience. Procedure two, outward signs: A person pins his use of a word to outward, pre-verbal signs of the sensation. I identify and explain a third procedure and show that we in fact specify many kinds of mental states in this way.
Libet's experiments, supported by a strict one-to-one identity thesis between brain events and mental events, have prompted the conclusion that physical events precede the mental events to which they correspond. We examine this claim and conclude that it is suspect for several reasons. First, there is a dual assumption that an intention is the kind of thing that causes an action and that can be accurately introspected. Second, there is a real problem with the method of timing the mental events concerned given that Libet himself has found the reports of subjects to be unreliable in this regard. Third, there is a suspect assumption that there are such things as timable and locatable mental and brain events accompanying and causing human behaviour. For all these reasons we reject the claim that physical events are prior to and explain mental events.
In this paper I respond to separate criticisms by Bill Shaw (JBE, July 1988) and Richard Nunan (JBE, December 1988) of my paper A Critique of Milton Friedman's Essay The Social Responsibility of Business Is to Increase Its Profits (JBE, August 1986). Professors Shaw and Nunan identify several points where my argument could benefit from clarification and improvement. They also make valuable contributions to the discussion of the broad issue area of whether and to what extent business should exercise moral initiative. My objectives are (1) to show, with the aid of examples (inspired by Shaw) and the addition of one point of correction (inspired by Nunan), that my disapproving critique of Friedman's famous argument remains sound, (2) to show that Professor Shaw's argument contains serious problems, and (3) to build on the base laid by my critics by developing important reasons why business should exercise moral initiative.
The standard picture of evolution is externalist: a causal arrow runs from environment to organism, and that arrow explains why organisms are as they are (Godfrey-Smith 1996). Natural selection allows a lineage to accommodate itself to the specifics of its environment. As the interior of Australia became hotter and drier, phenotypes changed in many lineages of plants and animals, so that those organisms came to suit the new conditions under which they lived. Odling-Smee, Laland and Feldman, building on the work of Richard Lewontin, have shown that while sometimes appropriate, this is an inadequate conception of the relationship between organisms and the environments in which they live. Over time, organisms alter their environments as well as being altered by them (Lewontin 1982; Lewontin 1983; Lewontin 1985). For example, animals modulate the effects of their physical and biological environment by building shelters: the beaver's dam and lodge system and the termite mound are two famous cases of animal structures, but they are only two of many. Many thousands of animal species make nests, burrows and other shelters. Likewise, animals make tools that give them access to resources from which they would otherwise be excluded: thus the Galapagos woodpecker finch uses a cactus needle to extract insects from crevices in bark, insects that it would otherwise be unable to catch (Tebbich, Taborsky et al. 2001). Tool-making is not as common as shelter-making, but it is common; many animals make traps, for example, and there are many species of pit-making antlions. Thus organisms in part make the worlds in which they live: they partially construct their own niches. Odling-Smee, Laland and Feldman argue that this has five major and under-appreciated consequences for biological theory.
The first code of professional ethics must: (1) be a code of ethics; (2) apply to members of a profession; (3) apply to all members of that profession; and (4) apply only to members of that profession. The value of these criteria depends on how we define "code", "ethics", and "profession", terms the literature on professions has defined in many ways. This paper applies one set of definitions of "code", "ethics", and "profession" to a part of what we now know of the history of professions, thereby illustrating how the choice of definition can substantially alter both our answer to the question of which came first and (more importantly) our understanding of professional codes (and the professions that adopt them). Because most who write on codes of professional ethics seem to take for granted that physicians produced the first professional code, whether the Hippocratic Oath, Percival's Medical Ethics, the 1847 Code of Ethics of the American Medical Association (AMA), or some other document, I focus my discussion on these codes.
Philosophical inquiries into morality are as old as philosophy, but it may turn out that morality itself is much, much older than that. At least, that is the main thesis of primatologist Frans de Waal, who, in this short book based on his Tanner Lectures at Princeton, elaborates on what biologists have been hinting at since Darwin's (1871) The Descent of Man and Hamilton's (1963) studies on the evolution of altruism: morality is yet another allegedly human characteristic that turns out to have been built over evolutionary time by natural selection.
If Dinesh D'Souza knew just a little bit more philosophy, he would realize how silly he appears when he accuses me of committing what he calls "the Fallacy of the Enlightenment" and challenges me to refute Kant's doctrine of the thing-in-itself. I don't need to refute this; it has been lambasted so often and so well by other philosophers that even self-styled Kantians typically find one way or another of excusing themselves from defending it. And speaking of fallacies, D'Souza contradicts himself within the space of a few paragraphs. If, as he says, Kant showed that we humans "will never know" the universe in itself, then theists couldn't "know that there is a reality greater than, and beyond, that which our senses and our minds can ever comprehend." They may take this on faith, if they wish, but they mustn't claim to know it, on pain of contradiction. We brights see no good reason to join them in their conviction, and they must admit that they see no good reason either. If they did, it wouldn't be purely a matter of faith.
In Slaves of the Passions, Mark Schroeder provides a systematic, rigorously argued defense of a Humean theory of reasons for action, taking pains to respond to influential objections to the view. While inspired by Hume, Schroeder makes it clear that he aims to develop a Humean theory, not necessarily one that Hume himself embraced, and for this reason little is said about Hume in the book. One respect in which Schroeder takes himself to be departing from Hume is in developing a normative account. On his reading, Hume held that only beliefs could stand in the reason relation (187, n11), whereas Schroeder, like many contemporary Humeans, holds that actions can as well. He sets out to develop a theory of this…
Because work looms so large in our lives, I believe that most of us don't reflect on its importance and significance. For most of us, work is, well, work: something we have to do to maintain our lives and pay the bills. I believe, however, that work is not just a part of our existence that can be easily separated from the rest of our lives. Work is not simply about the trading of labor for dollars. Perhaps because we live in a society that markets and hawks the fruits of our labor and not the labor itself, we have forgotten, or never really appreciated, the fact that the business of work is not simply to produce goods, but also to help produce people. We need work, and as adults we find identity and are identified by the work we do. If this is true, then we must be very careful about what we choose to do for a living, for what we do is what we'll become.
In the late 1980s there was a series of sensational business scandals in the United Kingdom. There was particular public outrage at the plundering of pension funds by Robert Maxwell, at the failure of auditors to expose the impending bankruptcy of the Bank of Credit and Commerce International, and at the apparently undeserved high pay raises received by senior business executives. The City of London responded by creating a special committee to examine the financial aspects of corporate governance. This paper describes the resulting Code of Best Practice produced by the Cadbury Committee. To reduce the power of executive directors in the boardroom, the Code recommends a greater role for non-executive directors, changes in board operations, and a more active role for auditors. The paper reviews the various published reactions to the Cadbury Report and concludes that the Code is unlikely to stem the incidence of business scandals in the United Kingdom.
Internalists about reasons generally insist that if a putative reason, R, is to count as a genuine normative reason for a particular agent to do something, then R must make a rational connection to some desire or interest of the agent in question. If internalism is true, but moral reasons purport to apply to agents independently of the particular desires, interests, and commitments they have, then we may be forced to conclude that moral reasons are incoherent. Richard Joyce (2001) develops an argument along these lines. Against this view, I argue that we can make sense of moral reasons as reasons that apply to, and are capable of motivating, agents independently of their prior interests and desires. More specifically, I argue that moral agents, in virtue of their capacities for empathy and shared intentionality, are sensitive to reasons that do not directly link up with their pre-existing ends. In particular, they are sensitive to, and hence can be motivated by, reasons grounded in the desires, projects, commitments, concerns, and interests of others. Moral reasons are a subset of this class of reasons to which moral agents are sensitive. Thus, moral agents can be motivated by moral reasons, even where such reasons fail to link up to their own pre-existing ends.
My purpose here is to examine the question of how the law can be incorporated within morality and how the existence of the law can impinge on our moral rights and duties, a question (or questions) which is a central aspect of the broad question of the relation between law and morality. My conclusions cast doubts on the incorporation thesis, that is, the view that moral principles can become part of the law of the land by incorporation.
This paper brings needed clarity to the influential view that species are cohesive entities held together by gene flow, and then develops an empirical argument against that view: neglected data suggest gene flow is neither necessary nor sufficient for species cohesion. Implications are discussed.
The concept of temporal flow has been attacked both on the grounds that it is logically incoherent, and on the grounds that it conflicts with the theory of relativity. I argue that the charge of incoherence cannot be made to stick: McTaggart's argument commits the fallacy of equivocation, and arguments deployed by Smart and others turn out to be question-begging. But objections arising from relativity, so I claim, have considerably more force than Lucas acknowledges. Moreover, the idea of equating the cosmic time which arises in general relativistic cosmology with a metaphysically preferred space-time foliation founders on the fact that the Friedmann models are idealisations. Finally, Lucas may be right in claiming that dynamical wave-function collapse, provided it does not propagate superluminally, will define a preferred foliation. But it is arguable that this consideration, far from supporting Lucas's position, is grounds for rejecting collapse interpretations of quantum mechanics.
In 'Benefit, Disability and the Non-Identity Problem', Hallvard Lillehammer uses the case of a couple who chose to have deaf children to argue against the view that impartial perspectives can provide an exhaustive account of the rightness and wrongness of particular reproductive choices. His conclusion is that the traditional approach to the non-identity problem leads to erroneous conclusions about the morality of creating disabled children. This paper will show that Lillehammer underestimates the power of impartial perspectives and exaggerates the ethical force of partial perspectives, which in turn commits him to providing weak justifications for the choice made by the couple in his example case.
Many commentators on Alfred Tarski have, following Hartry Field, claimed that Tarski's truth-definition was motivated by physicalism—the doctrine that all facts, including semantic facts, must be reducible to physical facts. I claim, instead, that Tarski did not aim to reduce semantic facts to physical ones. Thus, Field's criticism that Tarski's truth-definition fails to fulfill physicalist ambitions does not reveal Tarski to be inconsistent, since Tarski's goal is not to vindicate physicalism. I argue that Tarski's only published remarks that speak approvingly of physicalism were written in unusual circumstances: Tarski was likely attempting to appease an audience of physicalists that he viewed as hostile to his ideas. In later sections I develop positive accounts of: (1) Tarski's reduction of semantic concepts; (2) Tarski's motivation to develop formal semantics in the particular way he does; and (3) the role physicalism plays in Tarski's thought.
"Capital is moved as much and as little by the degradation and final depopulation of the human race, as by the probable fall of the earth into the sun. Après moi le déluge! is the watchword of every capitalist and of every capitalist nation" (Marx, Capital Vol. 1, 380-381).
At least three books struggle to emerge from this volume. One book, at the level of popular science, leads us through the development of physics, from Newton's laws to Bell's inequalities, in order to argue for the relevance of consciousness to the understanding of quantum theory. This is followed by a sketch of an interpretation of quantum mechanics. Interwoven with both is a memoir of Walker's teenage girlfriend, who died of Hodgkin's disease nearly fifty years ago. The theme which holds the volume together is Walker's insistence on the importance of looking beyond materialism.
Could we plausibly believe in the fundamental tenets of classical liberalism and, at the same time, support the state's raising of immigration barriers? The thesis of this paper is that if we accept the main tenets of classical liberalism as essentially correct, we should regard immigration barriers as essentially illegitimate. Considered under ideal conditions, immigration barriers constitute an unjustified infringement on individuals' ownership rights, since it is difficult to identify a purpose that such an infringement could have that would outweigh the disadvantages created by eliminating important competitive pressures on governments. Considered under nonideal conditions, the problem is, roughly, that immigration barriers cannot be seen as the choice of a lesser evil in the face of either an expected extension of the redistributive state or an expected threat on liberal institutions. On the contrary, since they relax the constraints faced by governments, immigration barriers should be seen as a major contributor in creating the conditions for the perpetuation of the sort of political arrangements that classical liberals resist. If individual sovereignty is to be protected, the sovereignty of the state over a particular territory should not include a prerogative to determine who is to inhabit it.
Scot Soames’ new book, What is Meaning, is an important book, both in the issues it raises and in its shortcomings. It is the first serious discussion of meaning (not “semantic content” or some other term of art designed to sidestep the real issue) by a leading analytic philosopher of language in a long while, and its findings lead towards a more realistic understanding of meaning and language.In his account, Soames uses the notion of cognitive event to account for the (...) unity of the proposition, but, crucially, his choice of predication as the centerpiece of this account undermines it. Furthermore, Soames appears oblivious of the existence of empirical and theoretical studies examining the connection between actual cognitive events and linguistic structure - studies that rather point to the irrelevance of the philosophical approach he is adopting. (shrink)
The philosophical relationship that obtains between the work of Merleau-Ponty and Derrida has continued to intrigue and preoccupy many of us despite, or perhaps even partly because of, the fact that Derrida did not accord the work of Merleau-Ponty much attention during his remarkably prolific career. Two relatively recent books of Derrida's have addressed this gap: Memoirs of the Blind and, more recently, On Touching. However, although Derrida proposes an "entire re-reading" of the later Merleau-Ponty in Memoirs of the Blind, with the clear implication that there are hitherto unaccessed and invaluable resources to be mined in this body of work, I will suggest that the actual reading of Merleau-Ponty propounded in On Touching falls well short of this ambition. While this chapter will raise some critical questions about the interpretation that Derrida offers of Merleau-Ponty in 'Exemplary Stories of the Flesh: Tangent 3', including the implication that his work on the senses and intersubjectivity remains mired in theological prejudices, it will also be concerned to examine the transcendental philosophy of time (or philosophy of the contretemps that breaks open time but nonetheless pertains to it) that undergirds and motivates Derrida's engagement with the philosophies of touch. In this latter respect, I will argue that Derrida's philosophy is itself 'touched' by time, in the peculiar sense of 'touched' that connotes affected and wounded. On my reading, his work instantiates an ethics of non-presentist time, an ethics of that time which is the transcendental condition of the present and any event of touch.
I ask whether this prevarication on the issue of the transcendental and the ethical is reason to look for a different understanding of both time and the transcendental from Derrida's, and I end this chapter by once more proposing a dialectic between the disjunctive and conjunctive aspects of time that does not accord any kind of a priori privilege to the one over the other.
This paper develops a non-relativist version of contextualism about knowledge. It is argued that a plausible contextualism must take into account three features of our practice of attributing knowledge: (1) knowledge-attributions follow a default-and-challenge pattern; (2) there are preconditions for a belief's enjoying the status of being justified by default (e.g. being orthodox); and (3) for an error-possibility to be a serious challenge, there has to be positive evidence that the possibility might be realized in the given situation. It is argued that standard "semantic" versions of contextualism (e.g. those of Lewis, Cohen, DeRose) fail to take these features into account, which makes them overly hospitable to the sceptic, and that Williams' version of contextualism, although incorporating (1), fails to do justice to (2) and (3). According to the contextualism developed here, although epistemic standards vary with the context, the truth-value of particular knowledge-attributions does not. Contexts here are understood as being constituted by two elements: an epistemic practice (a rule-governed social practice such as a scientific discipline, the law, a craft etc., in which knowledge-claims are evaluated according to specific standards) and the "facts of the matter" (i.e. those facts which, together with the epistemic standards in question, determine which error-possibilities are relevant and thus have to be eliminated for a knowledge-claim to be true). If there are several epistemic practices, and thus several contexts, in which a knowledge-claim can be evaluated, it is the "strictest" practice that counts. In this way, the counterintuitive consequence of other versions of contextualism that the same knowledge-claim can be true in one context, but false in another, can be avoided.
At the same time, scepticism can be resisted since even in the "strictest" epistemic practices, error-possibilities become relevant only when backed by positive evidence that they might in fact obtain.
Anecdotes have shown that some articles on profitable drugs are constructed and shepherded through publication by pharmaceutical companies and their agents, whose influence is largely invisible to readers. This is ghost-management: the substantial but unrecognized research, analysis, writing, editing and/or facilitation behind publication. Publicly available documents suggest that these practices are extremely widespread, affecting up to 40% of clinical trial reports in key periods, but it has been unclear how representative these documents are. This article presents the results of an investigative sampling of the self-presentation of publication planning services, and presents this and other evidence of a sizable publication planning industry. These different lines of evidence indicate that ghost-management is a common and important phenomenon, strongly affecting the published medical literature in the service of marketing.
Bayesian epistemology postulates a probabilistic analysis of many sorts of ordinary and scientific reasoning. Huber () has provided a novel criticism of Bayesianism, whose core argument involves a challenging issue: confirmation by uncertain evidence. In this paper, we argue that under a properly defined Bayesian account of confirmation by uncertain evidence, Huber's criticism fails. By contrast, our discussion will highlight what we take as some new and appealing features of Bayesian confirmation theory.
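As a point of orientation for readers outside formal epistemology, the standard textbook machinery behind "confirmation by uncertain evidence" (a sketch of the usual setup, not a quotation of the particular account the authors defend) combines Jeffrey conditionalization with a relevance notion of confirmation:

```latex
% Jeffrey conditionalization: when learning shifts the probability of
% evidence E to a new value P'(E) without making E certain, the
% posterior probability of a hypothesis H is the mixture
P'(H) = P(H \mid E)\,P'(E) + P(H \mid \lnot E)\,P'(\lnot E).

% Relevance account of confirmation: the uncertain evidence confirms H
% just in case the shift raises H's probability,
P'(H) > P(H),
% with degree of confirmation often measured by the difference
d(H) = P'(H) - P(H).
```

When P'(E) = 1 this reduces to ordinary conditionalization, so the interesting cases for the debate are precisely those with P'(E) < 1; whether this formulation matches the "properly defined" account of the paper is left open by the abstract.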
Naive truth theory is, roughly, the theory of truth that in classical logic leads to well-known paradoxes (such as the Liar paradox and the Curry paradox). One response to these paradoxes is to weaken classical logic by restricting the law of excluded middle and introducing a conditional not defined from the other connectives in the usual way. In "New Grounds for Naive Truth Theory" (), Steve Yablo develops a new version of this response, and cites three respects in which he deems it superior to a version that I've advocated in several papers. I think he's right that my version was non-optimal in some of these respects (one and a half of them, to be precise); however, Yablo's own account seems to me to have some undesirable features as well. In this paper I will explore some variations on his account, and end up tentatively advocating a synthesis of his account and mine (one that is somewhat closer to mine than to his).
In recent years there has been great progress in the development of computational methods for combinatorial chemistry applied to drug discovery. This approach to drug discovery is sometimes called a "rational way" to manage a well-known phenomenon in chemistry: serendipitous discovery. Traditionally, serendipitous discoveries are understood as accidental findings made when the discoverer is in quest of something else. This 'traditional' pattern of serendipity appears to be a good characterization of discoveries where "luck" plays a key role. In this sense, some initial failures in combinatorial chemistry are frequently attributed to a naïve appropriation of a "serendipity model" of discovery (a "serendipity mistake"). In this paper we try to evaluate this statement by criticizing its foundations. It will be suggested that the notion of serendipity has different aspects, and that the criticism of the first attempts could itself be understood as a "serendipity mistake." We will suggest that "serendipity" strategies, a kind of blind search, can sometimes be seen as a genuine part of scientific practice. A discussion will ensue about how this characterization can give us a better understanding of some aspects of serendipitous discovery.
Scanlon suggests a buck-passing account of goodness. To say that something is good is not to give a reason to, say, favour it; rather, it is to say that there are such reasons. When it comes to wrongness, however, Scanlon rejects a buck-passing account: to say that φ-ing is wrong is, on his view, to give a sufficient moral reason not to φ. Philip Stratton-Lake (2003) argues that Scanlon can evade a redundancy objection against his (Scanlon's) view of wrongness by adopting a buck-passing account of wrongness. We argue that this manoeuvre does not succeed. Scanlon's notion of wrongness rests on the idea of a reasonably rejectable principle. As Stratton-Lake points out, Scanlon offers two accounts, one in terms of permission, the other in terms of proscription. The permission account is tricky to formulate. Scanlon's account (quoted in Stratton-Lake 2003: 71) might suggest any of the following four formulations (where the principles in question are principles 'governing how one may act' (Scanlon…
The question of whether corporate social responsibility (CSR) has a positive impact on firm value has been analysed almost exclusively from the perspective of the stock market. In this article we therefore investigate, for the first time, the relationship between the valuation of Euro corporate bonds and the CSR standards of mainly European companies. The debt market generally carries considerable weight in corporate finance, so creditors should in principle play a significant role in the transmission of CSR into the valuation of financial instruments. Given that socially responsible firms are often regarded as economically more successful and less risky, they should have lower risk premia. The results of the empirical analysis, based on an extensive data panel, reveal however that the risk premium for socially responsible firms (according to the classification by SAM Group) was, ceteris paribus, higher than for non-socially responsible companies. Moreover, only one of the models investigated was even weakly significant. Thus the relationship must largely be classified as marginal; CSR has apparently not yet been incorporated into the pricing of corporate bonds.
Philosophers have not taken the evolution of human beings seriously enough. If they did, argues Peter Munz, many long-standing philosophical problems would be resolved. One of the philosophical consequences of biology is that all the knowledge produced in evolution is a priori, established hypothetically by chance mutation and selective retention rather than by observation and intelligent induction. For organisms as embodied theories, selection is natural. For theories as disembodied organisms, it is artificial. Following Karl Popper, the growth of knowledge is seen to be continuous from "the amoeba to Einstein." Philosophical Darwinism brings perspective to contemporary debates. It has far-reaching implications for cognitive science and artificial intelligence, and questions attempts from the field of biology to reduce mental events to neural processes. Most importantly, it provides a rational postmodern alternative to what the author views as the unreasonable postmodern theories of Kuhn, Lyotard, and Rorty.
This new edition of William James's 1909 classic, A Pluralistic Universe, reproduces the original text, modernizing only the spelling. The book has been annotated throughout to clarify James's points of reference and discussion. There is a new, fuller index, a brief chronology of James's life, and a new bibliography chiefly based on James's own references. The editor, H.G. Callaway, has included a new Introduction which elucidates the legacy of Jamesian pluralism and surveys some related questions of contemporary American society. A Pluralistic Universe was the last major book James published during his lifetime. It is a substantial philosophical work, devoted to a thoroughgoing criticism of Hegelian monism and Absolutism, and to the exploration of philosophical and social-theological alternatives. Our world of some one hundred years on is much the better for James's contributions, and understanding James's pluralism deeply contributes even now to America's self-understanding. At present, we are more certain that America is, and is best as, a pluralistic society than we are of what particular forms our pluralism should take. Keeping an eye out for social interpretations of Jamesian pluralism, this new philosophical reading casts light on our twenty-first-century alternatives by reference to prior American experience and developments.
Although they theorized in distinctly different times, distinctly different cultures, and under distinctly different circumstances, John Dewey and Paulo Freire exhibit notable philosophical similarities. This article focuses on two major themes evident in a sample of each philosopher's major works, democracy and experience, and draws theoretical comparisons between the ways each philosopher approaches these concepts in terms of definition and application to educational and social practice. The author suggests that, despite some paradigmatic differences, the fundamental definitions and uses of these concepts expressed by both philosophers are largely comparable and complementary.
I begin with a personal confession. Philosophical discussions of existence have always bored me. When they occur, my eyes glaze over and my attention falters. Basic ontological questions often seem best decided by banging on the table: rocks exist, fairies do not. Argument can appear long-winded and miss the point. Sometimes a quick distinction resolves any apparent difficulty. Does a falling tree in an earless forest make noise, i.e. does the noise exist? Well, if "noise" means that an ear must be there to hear it, then the answer to the question is evidently "no." But if "noise" means that, if there were (counterfactually) someone there, then he would hear it, then just as obviously the answer becomes "yes."
The Turing Test (TT), as originally specified, centres on the ability to perform a social role. The TT can be seen as a test of an ability to enter into normal human social dynamics. In this light it seems unlikely that such an entity can be wholly designed in an off-line mode; rather, a considerable period of training in situ would be required. The argument that, since we can pass the TT, and our cognitive processes might be implemented as a Turing Machine (TM), a TM that could pass the TT could consequently be built, is attacked on the grounds that not all TMs are constructible in a planned way. This observation points towards the importance of developmental processes that use random elements (e.g., evolution), but in these cases it becomes problematic to call the result artificial. This has implications for the means by which intelligent agents could be developed.