Much contemporary epistemology is informed by a kind of confirmational holism, and a consequent rejection of the assumption that all confirmation rests on experiential certainties. Another prominent theme is that belief comes in degrees, and that rationality requires apportioning one's degrees of belief reasonably. Bayesian confirmation models based on Jeffrey Conditionalization attempt to bring together these two appealing strands. I argue, however, that these models cannot account for a certain aspect of confirmation that would be accounted for in any adequate holistic confirmation theory. I then survey the prospects for constructing a formal epistemology that better accommodates holistic insights.
Many still seem confident that the kind of semantic theory Putnam once proposed for natural kind terms is right. This paper seeks to show that this confidence is misplaced because the general idea underlying the theory is incoherent. Consequently, the theory must be rejected prior to any consideration of its epistemological, ontological or metaphysical acceptability. Part I sets the stage by showing that falsehoods, indeed absurdities, follow from the theory when one deliberately suspends certain devices Putnam built into it, presumably in order to block such entailments. Part II then raises the decisive issue of at what cost these devices do the job they need to do. It argues that - apart from possessing no other motivation than their capacity to block the consequences derived in Part I - they only fulfil this blocking function if they render the theory unable to deal with fiction and related 'make-believe' activities. Part III indicates the affinity Putnam's account has with the classically 'denotative' view of meaning, and thus how its weaknesses may be seen as a variant of the classical weakness of 'denotative' approaches. It concludes that the theory is a conceptual muddle.
A number of philosophers, from Thomas Reid through C. A. J. Coady, have argued that one is justified in relying on the testimony of others, and furthermore, that this should be taken as a basic epistemic presumption. If such a general presumption were not ultimately dependent on evidence for the reliability of other people, the ground for this presumption would be a priori. Such a presumption would then have a status like that which Roderick Chisholm claims for the epistemic principle that we are justified in believing what our senses tell us.
The most immediately appealing model for formal constraints on degrees of belief is provided by probability theory, which tells us, for instance, that the probability of P can never be greater than that of (P v Q). But while this model has much intuitive appeal, many have been concerned to provide arguments showing that ideally rational degrees of belief would conform to the calculus of probabilities. The arguments most frequently used to make this claim plausible are the so-called "Dutch Book" arguments.
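The constraint this abstract cites, that P can never be more probable than (P v Q), follows in one step from the standard probability axioms; a worked version of the derivation:

```latex
P(P \lor Q) \;=\; P(P) + P(Q) - P(P \land Q) \;\ge\; P(P),
\qquad \text{since } P(Q) \ge P(P \land Q) \ge 0.
```

The inequality holds because the conjunction (P and Q) entails Q, so its probability can never exceed that of Q alone.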
According to Hubert L. Dreyfus, Heidegger's central innovation is his rejection of the idea that intentional activity and directedness is always and only a matter of having representational mental states. This paper examines the central passages to which Dreyfus appeals in order to motivate this claim. It shows that Dreyfus misconstrues these passages significantly and that he has no grounds for reading Heidegger as anticipating contemporary anti-representationalism in the philosophy of mind. The misunderstanding derives from lack of sensitivity to Heidegger's own intellectual context. The otherwise laudable strategy of reading Heidegger as a philosopher of mind becomes an exercise in finding a niche for Heidegger in Dreyfus's own unquestioned present. Heidegger is thereby mapped on to an intellectual context which, given its naturalistic commitments, is foreign to him. The paper concludes by indicating the direction in which a more historically sensitive, and thus accurate, interpretation of Heidegger must move.
Part I [sections 2–4] draws out the conceptual links between modern conceptions of teleology and their Aristotelian predecessor, briefly outlines the mode of functional analysis employed to explicate teleology, and develops the notion of cybernetic organisation in order to distinguish teleonomic and teleomatic systems. Part II is concerned with arriving at a coherent notion of intentional control. Section 5 argues that intentionality is to be understood in terms of the representational properties of cybernetic systems. Following from this, section 6 argues that intentional control needs to be seen as a particular type of relationship between the system and its environment.
It is obvious that we would not want to demand that an agent's beliefs at different times exhibit the same sort of consistency that we demand from an agent's simultaneous beliefs; there's nothing irrational about believing P at one time and not-P at another. Nevertheless, many have thought that some sort of coherence or stability of beliefs over time is an important component of epistemic rationality.
Donald Campbell has long advocated a naturalist epistemology based on a general selection theory, with the scope of knowledge restricted to vicarious adaptive processes. But being a vicariant is problematic because it involves an unexplained epistemic relation. We argue that this relation is to be explicated organizationally in terms of the regulation of behavior and internal state by the vicariant, but that Campbell's selectionist approach can give no satisfactory account of it because it is opaque to organization. We show how organizational constraints and capacities are crucial to understanding both evolution and cognition and conclude with a proposal for an enriched, generalized model of evolutionary epistemology that places high-order regulatory organization at the center.
Heidegger's central concern is the question of being (Seinsfrage). The paper reconstructs this question at least for the young (pre-Kehre) Heidegger in the light of two interconnected hypotheses: (1) the substantial content of the question of being can be identified by seeing it as a response to (Marburg) neo-Kantianism; and (2) this content centres around the claim that, pace the neo-Kantians, 'epistemological' concerns are grounded in 'ontological' ones, for which reason 'ontology' must precede 'epistemology' as a form of philosophical inquiry. In section I the general position of (Marburg) neo-Kantianism is sketched. In section II the implications of the neo-Kantian position for the concepts of truth and reality, reason, and experience, are outlined; significant similarities to Sellars, Davidson, and Brandom are revealed. Finally, in section III Heidegger's analysis of everydayness is shown to yield a distinct critique of the neo-Kantian relativization of the concept of the real to the theoretically knowable. From this critique it emerges why Heidegger thinks that 'ontology' precedes 'epistemology'. The project of fundamental ontology marked by the question of being thus shows itself to be at least in part a response to the aporia of Marburg neo-Kantianism.
In order to investigate cognition fundamental assumptions must be made about what, in general terms, it is. In cognitive science it is usually assumed that cognition is computational and representational. There have been well known disputes over these assumptions, with rival claims that cognition is dynamical, situated and embodied. In this paper I emphasize the relations between cognition and control. I present a model of cognition that makes the claim that it is a form of high-order control, and I argue that viewing cognition as high-order control could be a useful framework assumption for cognitive science. Cognition has many aspects and different concepts can emphasize different aspects of it. Computational and representational assumptions have been very productive, and dynamical and embodied assumptions have also been productive, though to a much lesser extent so far. Control, however, has received insufficient attention in cognitive science. The model I propose is based on a point that few will dispute, namely that control of behavior is the ultimate function of cognition. Bringing this to the foreground can be productive by highlighting the ways in which cognition is structured in relation to this control function. If this perspective is valuable it will be because the control function has a highly structuring effect on cognition. As a result a control-based perspective will predict many features of cognition and yield a coherent, integrated picture.
At least phenomenologically the way communicative acts reveal intentions is different from the way non-communicative acts do this: the former have an "addressed" character which the latter do not. The paper argues that this difference is a real one, reflecting the irreducibly "conventional" character of human communication. It attempts to show this through a critical analysis of the Gricean programme and its methodologically individualist attempt to explain the "conventional" as derivative from the "non-conventional". It is shown how in order to eliminate certain counterexamples the Gricean analysis of utterer's meaning must be made self-referential. It is then shown how this in turn admits an "ontological difference" which undercuts all methodological individualism: meaning something by an utterance must then have a certain intrinsic, irreducible "conventionality" and "intersubjectivity". Objections to this claim are raised and dealt with. It is suggested that any problem of origin might be resolvable by rejecting the semantic reductionism of Grice's programme. An internal relation between self-consciousness, intersubjectivity and language is suggested. The paper ends by speculating that the self-conscious subject is intrinsically embodied and related to other subjects in that for it its body is essentially a medium of signs with which to express its "inner states" to others.
Both Representation Theorem Arguments and Dutch Book Arguments support taking probabilistic coherence as an epistemic norm. Both depend on connecting beliefs to preferences, which are not clearly within the epistemic domain. Moreover, these connections are standardly grounded in questionable definitional/metaphysical claims. The paper argues that these definitional/metaphysical claims are insupportable. It offers a way of reconceiving Representation Theorem arguments which avoids the untenable premises. It then develops a parallel approach to Dutch Book Arguments, and compares the results. In each case preference defects serve as a diagnostic tool, indicating purely epistemic defects.
In attempting to improve ethical decision-making in business organizations, researchers have developed models of ethical decision-making processes. Most of these models do not include a role for law in ethical decision-making, or if law is mentioned, it is set as a boundary constraint, exogenous to the decision process. However, many decision models in business ethics are based on cognitive moral development theory, in which the law is thought to be the external referent of individuals at the level of cognitive development that most people have achieved. Other theoretical bases of ethical decision models, social learning, and experientialism, also imply a role for law that is rarely made explicit. Law is a more important aspect of ethical decision-process models than it appears to be in the models. This paper will derive explicit roles for the law from the cognition, experientialism, and social learning theories that are used to build ethical decision-making models for business behavior.
The main appeal of the currently popular "bootstrap" account of confirmation developed by Clark Glymour is that it seems to provide an account of evidential relevance. This account has, however, had severe problems; and Glymour has revised his original account in an attempt to solve them. I argue that this attempt fails completely, and that any similar modifications must also fail. If the problems can be solved, it will only be by radical revisions which involve jettisoning bootstrapping's basic approach to theories. Finally, I argue that there is little reason to think that even such drastic modifications will lead to a satisfactory account of relevance.
It is commonly acknowledged that, in order to test a theoretical hypothesis, one must, in Duhem's phrase, rely on a "theoretical scaffolding" to connect the hypothesis with something measurable. Hypothesis-confirmation, on this view, becomes a three-place relation: evidence E will confirm hypothesis H only relative to some such scaffolding B. Thus the two leading logical approaches to qualitative confirmation--the hypothetico-deductive (H-D) account and Clark Glymour's bootstrap account--analyze confirmation in relative terms. But this raises questions about the philosophical interpretation of the technical conditions these accounts describe. What does it mean to say that E confirms H "relative to B"? How should we interpret the relation we are trying to analyze?
Jarrett Leplin’s paper is multifaceted; it’s rich with ideas, and I won’t even try to touch on all of them. Instead, I’d like to raise three questions about the paper: one about its definition of reliable method, one about its solution to the generality problem, and one about its answer to clairvoyance-type objections.
Since the introduction of the imitation game by Turing in 1950 there has been much debate as to its validity in ascertaining machine intelligence. We wish herein to consider a different issue altogether: granted that a computing machine passes the Turing Test, thereby earning the label of ``Turing Chatterbox'', would it then be of any use (to us humans)? From the examination of scenarios, we conclude that when machines begin to participate in social transactions, unresolved issues of trust and responsibility may well overshadow any raw reasoning ability they possess.
The processes associated with globalization have reinforced and even increased prevailing conditions of inequality among human beings with respect to their political, economic, cultural, and social opportunities. Yet—or perhaps precisely because of this trend—there has been, within political philosophy, an observable tendency to question whether equality in fact should be treated as a central value within a theory of justice. In response, I examine a number of nonegalitarian positions to try to show that the concept of equality cannot be dispensed with in any adequate consideration of justice.
Standard approaches to cognition emphasise structures (representations and rules) much more than processes, in part because this appears to be necessary to capture the normative features of cognition. However the resultant models are inflexible and face the problem of computational intractability. I argue that the ability of real world cognition to cope with complexity results from deep and subtle coupling between cognitive and non-cognitive processes. In order to capture this, theories of cognition must shift from a structural rule-defined conception of cognition to a thoroughgoing embedded process approach.
This paper investigates how deans and directors at the top 50 global MBA programs (as rated by the "Financial Times" in their 2006 Global MBA rankings) respond to questions about the inclusion and coverage of the topics of ethics, corporate social responsibility, and sustainability at their respective institutions. This work purposely investigates each of the three topics separately. Our findings reveal that: (1) a majority of the schools require that one or more of these topics be covered in their MBA curriculum and one-third of the schools require coverage of all three topics as part of the MBA curriculum, (2) there is a trend toward the inclusion of sustainability-related courses, (3) there is a higher percentage of student interest in these topics (as measured by the presence of a Net Impact club) in the top 10 schools, and (4) several schools are teaching these topics using experiential learning and immersion techniques. We note a fivefold increase in the number of stand-alone ethics courses since a 1988 investigation on ethics, and we include other findings about institutional support of centers or special programs; as well as a discussion of integration, teaching techniques, and notable practices in relation to all three topics.
Glymour's "bootstrap" account of confirmation is designed to provide an analysis of evidential relevance, which has been a serious problem for hypothetico-deductivism. As set out in Theory and Evidence, however, the "bootstrap" condition allows confirmation in clear cases of evidential irrelevance. The difficulties with Glymour's account seem to be due to a basic feature which it shares with hypothetico-deductive accounts, and which may explain why neither can give a satisfactory analysis of evidential relevance.
This paper outlines an original interactivist-constructivist (I-C) approach to modelling intelligence and learning as a dynamical embodied form of adaptiveness and explores some applications of I-C to understanding the way cognitive learning is realized in the brain. Two key ideas for conceptualizing intelligence within this framework are developed. These are: (1) intelligence is centrally concerned with the capacity for coherent, context-sensitive, self-directed management of interaction; and (2) the primary model for cognitive learning is anticipative skill construction. Self-directedness is a capacity for integrative process modulation which allows a system to "steer" itself through its world by anticipatively matching its own viability requirements to interaction with its environment. Because the adaptive interaction processes required of intelligent systems are too complex for effective action to be prespecified (e.g. genetically), learning is an important component of intelligence. A model of self-directed anticipative learning (SDAL) is formulated based on interactive skill construction, and argued to constitute a central constructivist process involved in cognitive development. SDAL illuminates the capacity of intelligent learners to start with the vague, poorly defined problems typically posed in realistic learning situations and progressively refine them, transforming them into problems with sufficient structure to guide the construction of a solution. Finally, some of the implications of I-C for modelling of the neuronal basis of intelligence and learning are explored; in particular, Quartz and Sejnowski's recent neural constructivism paradigm, enriched by Montague and Sejnowski's dopaminergic model of anticipative-predictive neural learning, is assessed as a promising, but incomplete, contribution to this approach.
The paper concludes with a fourfold reflection on the divergence in cognitive modelling philosophy between the I-C and the traditional computational information processing approaches.
Many modern commentators on inertial phenomena hold (or just assume) that there is no "problem of inertia", on the grounds that either (a) no explanation is needed for such phenomena or (b) the explanation is already at hand. My purpose here is to comment on both views, defending the thesis that the problem is real and still unsolved.
If it is accepted that the real marketplace does not necessarily distribute wealth in the manner that the ideal market would have done, and that societal institutions have an obligation to bring the real and ideal market distributions into accord, then it can be argued that economic actors have a responsibility to consider the effects of their activities on the distribution of wealth in society. This paper asserts that businesses have a responsibility to consider the wealth distribution effects of their wealth-creating decisions. We use arguments from moral economics and Catholic social teaching to support this assertion, deriving decision principles that we apply to the Starbucks fair trade coffee case.
Regulation is often applied to business behavior to ensure that the social costs of doing business are included in the cost and pricing structures of the firm. Because the consumer benefits from the transaction that generated the social costs, asking the consumer to bear the burden imposed by the transaction is fair. However, there may be a lack of justice in the internal and external distribution of the social costs of doing business if consumers are the only party bearing that burden, or if the costs are being shifted to employees or taxpayers when a closer stakeholder is also benefiting from the transaction – the stockowner. A social justice perspective requires that those benefiting from a transaction share in the burdens of it. We propose that a Tobin-like tax on stock transactions might be a just means of achieving greater justice in the distribution of the social cost burden.
For the purpose of contributing to a clarification of the term process, different kinds of musical processes are investigated: A rule-determined phase shifting process in Steve Reich's Piano Phase (1966), a model for an indeterminate composition process in John Cage's Variations II (1961), a number of evolution processes in György Ligeti's In zart fliessender Bewegung (1976), and a generative process of fractal nature in Per Nørgård's Second Symphony (1970). In conclusion I propose that six process categories should be included in a typology of processes: Rule-determined, goal-directed and indeterminate transformation processes, and rule-determined, goal-directed and indeterminate generative processes.
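The rule-determined phase-shifting process the abstract attributes to Piano Phase can be made concrete with a short sketch. Treat the note names and the one-note-per-step shifting rule below as illustrative assumptions rather than a transcription of the score: one pianist repeats a fixed melodic cell while the other gradually drifts ahead, so after k shift steps the second part sounds like the cell rotated by k notes.

```python
# Illustrative sketch of a Reich-style phase-shifting process: one part
# holds a repeating cell while the other advances one note per step.
PATTERN = ["E4", "F#4", "B4", "C#5", "D5", "F#4",
           "E4", "C#5", "B4", "F#4", "D5", "C#5"]  # assumed 12-note cell

def shifted_part(pattern, steps):
    """Return the drifting part after `steps` one-note shifts:
    a left rotation of the original cell."""
    k = steps % len(pattern)
    return pattern[k:] + pattern[:k]

# After a full cycle of shifts the two parts realign in unison.
assert shifted_part(PATTERN, len(PATTERN)) == PATTERN
```

The rotation captures why the process is rule-determined: every intermediate alignment, and the eventual return to unison, is fixed in advance by the cell length and the shifting rule.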
It appears to be a straightforward implication of distributed cognition principles that there is no integrated executive control system (e.g. Brooks 1991, Clark 1997). If distributed cognition is taken as a credible paradigm for cognitive science this in turn presents a challenge to volition because the concept of volition assumes integrated information processing and action control. For instance the process of forming a goal should integrate information about the available action options. If the goal is acted upon these processes should control motor behavior. If there were no executive system then it would seem that processes of action selection and performance couldn’t be functionally integrated in the right way. The apparently centralized decision and action control processes of volition would be an illusion arising from the competitive and cooperative interaction of many relatively simple cognitive systems. Here I will make a case that this conclusion is not well-founded. Prima facie it is not clear that distributed organization can achieve coherent functional activity when there are many complex interacting systems, there is high potential for interference between systems, and there is a need for focus. Resolving conflict and providing focus are key reasons why executive systems have been proposed (Baddeley 1986, Norman and Shallice 1986, Posner and Raichle 1994). This chapter develops an extended theoretical argument based on this idea, according to which selective pressures operating in the evolution of cognition favor high order control organization with a ‘highest-order’ control system that performs executive functions.
Quipus, knotted structures of woollen or cotton cords, were used as a bureaucratic tool in the Inca state. In the absence of a writing system, numerals and possibly other pieces of information were encoded on the quipus by tying knots into elaborately structured coloured cords. Though interpretation of the quipu contents is far from complete, some information on Inca mathematics can be deduced from the analysis of ancient specimens, especially when combined with the results of anthropological and linguistic research in contemporary Andean societies. In this paper, the quipus are introduced, their structure is explained, and some results on mathematical concepts of the Incas are presented based on a comparison of mathematical and anthropological literature on the subject.
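The knot-based encoding of numerals mentioned above can be illustrated with a minimal sketch. It assumes the standard decimal-positional reading of quipu pendant cords, with clusters of knots ordered from the highest power of ten (near the main cord) down to the units (at the free end); the function name and data layout are illustrative and not drawn from the paper.

```python
def decode_pendant(knot_clusters):
    """Decode a pendant cord under the assumed decimal-positional reading.

    `knot_clusters` lists the knot count in each cluster, most significant
    first; a cluster with zero knots marks an empty decimal position."""
    value = 0
    for count in knot_clusters:
        value = value * 10 + count  # each cluster is one power of ten
    return value

# Example: clusters of 3, 0 and 5 knots along one cord read as 305.
```

One design point worth noting: because positions are marked spatially along the cord, an empty cluster functions much like a zero digit, which is part of what makes quipus mathematically interesting.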
The basal and reciprocal models of the relationship between androgen secretion and dominance are not mutually exclusive. Individuals may differ in basal levels of androgen secretion, reactivity to experiences, and androgen sensitivity. Early experiences might affect any of these parameters.
The shapes of neurons and glial cells dictate many important aspects of their functions. In olfactory systems, certain architectural features are characteristics of these two cell types across a wide variety of species. The accumulated evidence suggests that these common features may play fundamental roles in olfactory information processing. For instance, the primary olfactory neuropil in most vertebrate and invertebrate olfactory systems is organized into discrete modules called glomeruli. Inside each glomerulus, sensory axons and CNS neurons branch and synapse in patterns that are repeated across species. In many species, moreover, the glomeruli are enveloped by a thin and ordered layer of glial processes. The glomerular arrangement reflects the processing of odor information in modules that encode the discrete molecular attributes of odorant stimuli being processed. Recent studies of the mechanisms that guide the development of olfactory neurons and glial cells have revealed complex reciprocal interactions between these two cell types, which may be necessary for the establishment of modular compartments. Collectively, the findings reviewed here suggest that specialized cellular architecture plays key functional roles in the detection, analysis, and discrimination of odors at early steps in olfactory processing.
The general structure of Steels & Belpaeme's (S&B's) central premise is appealing. Theoretical stances that focus on one type of mechanism miss the fact that multiple mechanisms acting in concert can provide convergent constraints for a more robust capacity than any individual mechanism might achieve acting in isolation. However, highlighting the significance of complex constraint interactions raises the possibility that some of the relevant constraints may have been left out of S&B's own models. Although abstract modeling can help clarify issues, it also runs the risk of oversimplification and misframing. A more subtle implication of the significance of interacting constraints is that it calls for a close relationship between theoretical and empirical research.
Ronald Dworkin’s work on the topic of equality over the past twenty-five years or so has been enormously influential, generating a great deal of debate about equality both as a practical aim and as a theoretical ideal. The present article attempts to assess the importance of one particular aspect of this work. Dworkin claims that the acceptance of abstract egalitarian rights to equal concern and respect can be thought to provide a kind of plateau in political argument, accommodating as it does a number of well-known ethical theories of social arrangement from utilitarianism to libertarianism. The article explores the moral foundations of these egalitarian rights and critically examines five specific reasons for supposing they matter in political debate. It is argued that though these reasons are perhaps less constructive than they might be reasonably expected to be, there is another more fundamental question we can ask about the scope of egalitarian rights the answer to which might ultimately help to explain their fundamental nature and importance. That question is: equality among whom?
Ronald Dworkin occupies a distinctive place in both public life and philosophy. In public life, he is a regular contributor to The New York Review of Books and other widely read journals. In philosophy, he has written important and influential works on many of the most prominent issues in legal and political philosophy. In both cases, his interventions have in part shaped the debates he joined. His opposition to Robert Bork's nomination for the United States Supreme Court gave new centrality to debates about the public role of judges and the role of original intent in constitutional interpretation. His writings in legal philosophy have reoriented the modern debate about legal positivism and natural law. In political philosophy, he has shaped the ways in which people debate the nature of equality; he has spawned a substantial literature about the relation between luck and responsibility in distributive justice; he has reframed debates about the sanctity of life. His work has also been the focus of many recent discussions of both democracy and the rule of law. This volume contains new essays on Dworkin's key contributions by writers who have themselves made important interventions in the debates.
This is a lucid and comprehensive introduction to, and critical assessment of, Ronald Dworkin's seminal contributions to legal and political philosophy. His theories have a complexity, originality, and moral power that have excited a wide range of academic and political thinkers, and even those who disagree with him acknowledge that his ideas must be confronted and given serious consideration. His enormous output of books and papers and his formidable profusion of lectures and seminars throughout the world, in addition to his teaching duties at Oxford and New York University, have made him a giant figure in contemporary thought. In short, Dworkin's theory of law is that the nature of legal argument lies in the best moral interpretation of existing social practices. His theory of justice is that all political judgements ought to rest ultimately upon the injunction that people are equal as human beings, irrespective of the circumstances in which they are born. Dworkin does not fit into an orthodox category. His theory of law is radical in that it sees legal argument as primarily about rights, yet conservative in seeing it as constrained by history. He is libertarian both in valuing ambition and in asserting a right to pornography, yet socialist in believing that no person has a right to a greater share of resources than anyone else. In particular, he advocates a system that would tax people on the resources they accumulate solely through their talent. Because Dworkin writes for a number of audiences - sometimes the general public, sometimes academic lawyers, sometimes philosophers and economists - it is often difficult to identify the different strands of his thought. The book aims to make his theories clear and accessible and to give an overall picture of his thinking that is sympathetic yet rigorously argued.
Morten S. Thaning (Department of Philosophy, Politics, and Management, Copenhagen Business School), review of Carleton B. Christensen, Self and World: From Analytic Philosophy to Phenomenology. Husserl Studies, Volume 26, Number 3. DOI: 10.1007/s10743-010-9078-2.
Exploring Law's Empire is a collection of essays by leading legal theorists and philosophers who have been invited to develop, defend, or critique Ronald Dworkin's controversial and exciting jurisprudence. The volume explores Dworkin's critique of legal positivism, his theory of law as integrity, and his writings on constitutional jurisprudence. Each essay is a cutting-edge contribution to its field of inquiry, the highlights of which include an introduction by Justice Stephen Breyer of the United States Supreme Court, and a concluding essay by Dworkin himself. This final chapter responds to the preceding essays and lays out Dworkin's own vision for the future of jurisprudence over the coming years.
Ronald Coase is usually considered anything but a methodologist. Thus, it is not surprising that, in the introduction to "How Should Economists Choose?", which is the only paper Coase wrote on a methodological topic, he readily confessed his relative ignorance of philosophy of science, candidly observing that "Words like epistemology do not come tripping from my tongue" (HSEC, 6). However, given the importance of this Nobel Prize winner's contribution to the renewal of theoretical thinking in economics, everyone should admit that his infrequent reflections on methodology, however crude they might look to methodologists, undoubtedly merit close consideration. Whatever methodological clarification these reflections might procure, it is surely worthwhile briefly to analyze them, with the hope of understanding a little better some dimensions of Ronald Coase's way of thinking and, more specifically, of emphasizing some implications of the close relationship between his methodological views and his radical conception of the market. With this in mind, I will first illustrate how far these methodological views seem, at first glance, to be dominated by a fundamental commitment to a straightforward realism; I will then show that they are instead subordinated, in a more complex way, to what I have just called Coase's radical conception of the market and, in conclusion, I will point out some questions raised by this situation.
I distill three somewhat interrelated approaches to the ethical criticism of humor: (1) attitude-based theories, (2) merited-response theories, and (3) emotional responsibility theories. I direct the brunt of my effort at showing the limitations of the attitudinal endorsement theory by presenting new criticisms of Ronald de Sousa’s position. Then, I turn to assess the strengths of the other two approaches, showing that their major formulations implicitly require the problematic attitudinal endorsement theory. I argue for an effects-mediated responsibility theory, holding that the strongest ethical criticism that can be made of our sense of humor is that it might indicate some omission on our part. This omission could only be culpable in so far as a particular joke could do harm to oneself or others. In response to Ted Cohen’s doubts that such a mechanism of harm is forthcoming, I argue that the primary vehicle of the harmful effects of humor is laughter.
In the first part of this article, I raise questions about Dworkin's theory of the intrinsic value of life and about the adequacy of his proposal to understand abortion in terms of different ways of valuing life. In the second part of the article, I consider his argument in "The Philosophers' Brief on Assisted Suicide", which claims that the distinction between killing and letting die is morally irrelevant, and that the distinction between intending and foreseeing death can be morally relevant but is not always so. I argue that the killing/letting die distinction can be relevant in the context of assisted suicide, but also show when it is not. Then I consider why the intention/foresight distinction can be morally irrelevant and conclude by presenting an alternative argument for physician-assisted suicide.