Jacques Derrida and Jürgen Habermas have long represented opposite camps in contemporary thought. Derrida, who pioneered the intellectual style of inquiry known as deconstruction, ushered in the postmodern age with his dramatic critique of reason; Habermas, on the other hand, has consistently argued in defense of reason, modernity, and the legacy of the Enlightenment. Their many differences led to a long-standing, if scattered, dialogue, evidence of which has been available in only bits and pieces. But now, for the first time, The Derrida-Habermas Reader brings these pieces together, along with a collection of essays documenting the intellectual relationship between two of the twentieth century’s preeminent thinkers. Taken together, Derrida and Habermas’s writings—combined here with contributions by other prominent philosophers and social theorists—tell the story of the two thinkers’ provocative engagement with each other’s ideas. Beyond exploring the conflict between Derrida’s deconstruction and Habermas’s communicative rationality, they show how the Derrida-Habermas encounter changed over the years, becoming more theoretically productive without ever collapsing into mutual rejection or simple compromise. Lasse Thomassen has divided the essays—including works on philosophy and literature, ethics, politics, and international law—into four parts that cover the full range of thought in which Derrida and Habermas engaged. The last of these sections fittingly includes the thinkers’ jointly signed work on European solidarity and the Iraq War, highlighting the hopes they held in common despite their differences. The wide breadth of this book, along with Thomassen’s lucid introductions to each section, makes The Derrida-Habermas Reader an ideal starting point for anyone interested in one of the most dynamic intellectual debates of our time.
This article argues that we must abandon the still predominant view of modernity as based upon a separation between the secular and the religious - a “separation” which is allegedly now brought into question again in “postsecularity”. It is more meaningful to start from the premise that religion and politics have always co-existed in various fields of tension and will continue to do so. The question then concerns the natures and modalities of this tension, and how one can articulate a publicly grounded reason with reference to it. It will first be argued that this question cannot be articulated, let alone fully answered, from the position developed by John Rawls. A different approach will then be developed, building on the writings of Eric Voegelin. This involves a much more serious engagement with the classical tradition in thought and philosophy than found in Rawls. It also implies much more than a “pragmatic” recognition of religion as a possible source for overlapping consensus, since for Voegelin a true, balanced rationality can only depart from an experientially grounded encounter with the transcendent.
In his most recent work, Jürgen Habermas has proposed a deliberative account of tolerance where the norms of tolerance--including the threshold of tolerance and the norms regulating the relationship between the tolerating and the tolerated parties--are the outcomes of deliberations among the citizens affected by the norms. He thinks that in this way, the threshold of tolerance can be rationalized and the relationship between tolerating and tolerated will rest on the symmetrical relations of public deliberations. In this essay, inspired by Jacques Derrida's work on the concept of hospitality, I propose a deconstructive reading of Habermas's writings on tolerance. I argue that Habermas is ultimately unable to provide a rational foundation for tolerance and that his conception of tolerance encounters the same problems he is trying to avoid, namely, the contingency of the threshold of tolerance and a paternalistic relation between tolerating and tolerated. Yet, contra Habermas, the deconstruction of tolerance does not result in its destruction and does not force us to give up on the concept and practice of tolerance.
‘The Second Mistake’ (TSM) is to think that if an act is right or wrong because of its effects, the only relevant effects are the effects of this particular act. This is not (as some think) a truism, since ‘the effects of this particular act’ and ‘its effects’ need not co-refer. Derek Parfit's rejection of TSM is based mainly on intuitions concerning sets of acts that over-determine certain harms. In these cases, each act belongs to the relevant set in virtue of a causal relation (other than marginal contribution) to a specific harmful event. This feature may make an act wrong, in a fashion consequentialists could admit. That explication of TSM does not rely on the questionable assumption that the set of acts is what harms here. Independently of this, there are several other reasons to prefer it to the ‘mere participation’ approach.
A broad range of evidence regarding the functional organization of the vertebrate brain – spanning from comparative neurology to experimental psychology and neurophysiology to clinical data – is reviewed for its bearing on conceptions of the neural organization of consciousness. A novel principle relating target selection, action selection, and motivation to one another, as a means to optimize integration for action in real time, is introduced. With its help, the principal macrosystems of the vertebrate brain can be seen to form a centralized functional design in which an upper brain stem system organized for conscious function performs a penultimate step in action control. This upper brain stem system retained a key role throughout the evolutionary process by which an expanding forebrain – culminating in the cerebral cortex of mammals – came to serve as a medium for the elaboration of conscious contents. This highly conserved upper brain stem system, which extends from the roof of the midbrain to the basal diencephalon, integrates the massively parallel and distributed information capacity of the cerebral hemispheres into the limited-capacity, sequential mode of operation required for coherent behavior. It maintains special connective relations with cortical territories implicated in attentional and conscious functions, but is not rendered nonfunctional in the absence of cortical input. This helps explain the purposive, goal-directed behavior exhibited by mammals after experimental decortication, as well as the evidence that children born without a cortex are conscious. Taken together these circumstances suggest that brainstem mechanisms are integral to the constitution of the conscious state, and that an adequate account of neural mechanisms of conscious function cannot be confined to the thalamocortical complex alone.
(Published Online May 1 2007) Key Words: action selection; anencephaly; central decision making; consciousness; control architectures; hydranencephaly; macrosystems; motivation; target selection; zona incerta.
Logical pluralism has been in vogue since JC Beall and Greg Restall (2006) articulated and defended a new pluralist thesis. Recent criticisms, such as those of Priest (2006a) and Field (2009), have suggested that there is a relationship between their type of logical pluralism and the meaning-variance thesis for logic. This is the claim, often associated with Quine (1970), that a change of logic entails a change of meaning. Here we explore the connection between logical pluralism and meaning-variance, both in general and for Beall and Restall's theory specifically. We argue that contrary to what Beall and Restall claim, their type of pluralism is wedded to meaning-variance. We then develop an alternative form of logical pluralism that circumvents at least some forms of meaning-variance.
It is sometimes held that rules of inference determine the meaning of the logical constants: the meaning of, say, conjunction is fully determined by either its introduction or its elimination rules, or both; similarly for the other connectives. In a recent paper, Panu Raatikainen (2008) argues that this view - call it logical inferentialism - is undermined by some "very little known" considerations by Carnap (1943) to the effect that "in a definite sense, it is not true that the standard rules of inference" themselves suffice to "determine the meanings of [the] logical constants" (p. 2). In a nutshell, Carnap showed that the rules allow for non-normal interpretations of negation and disjunction. Raatikainen concludes that "no ordinary formalization of logic ... is sufficient to `fully formalize' all the essential properties of the logical constants" (ibid.). We suggest that this is a mistake. Pace Raatikainen, intuitionists like Dummett and Prawitz need not worry about Carnap's problem. And although bilateral solutions for classical inferentialists - as proposed by Timothy Smiley and Ian Rumfitt - seem inadequate, it is not excluded that classical inferentialists may be in a position to address the problem too.
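Carnap's point can be made concrete with a toy sketch (my own illustration, with hypothetical helper names; nothing here is taken from Raatikainen or Carnap): a valuation under which every formula comes out true trivially respects every truth-preserving rule, so the standard rules alone cannot exclude this "non-normal" reading of negation.

```python
from itertools import product

# Toy mini-language: formulas are atoms 'p', 'q' or negations ('not', <formula>).

def classical(assignment):
    """Classical valuation induced by a truth assignment to the atoms."""
    def v(f):
        if isinstance(f, str):
            return assignment[f]
        return not v(f[1])           # classical negation flips the value
    return v

# Carnap-style "non-normal" interpretation: every formula, negations
# included, comes out true.
def non_normal(f):
    return True

def respects(v, premises, conclusion):
    """A valuation respects a rule iff true premises force a true conclusion."""
    return (not all(v(p) for p in premises)) or v(conclusion)

# A standard negation rule: from p and not-p, infer anything (here q).
p, not_p, q = 'p', ('not', 'p'), 'q'
rule = ([p, not_p], q)

# Every classical valuation respects the rule (the premises are never
# jointly true), and so does the non-normal valuation (the conclusion is
# always true) -- even though it makes p and not-p true together.
for bits in product([True, False], repeat=2):
    assert respects(classical(dict(zip('pq', bits))), *rule)
assert respects(non_normal, *rule)
assert non_normal(p) and non_normal(not_p)
print("the rules do not rule out the non-normal valuation")
```

Since the rules are stated purely in terms of truth preservation, nothing in them distinguishes classical negation from the valuation that makes a sentence and its negation true together, which is the sense in which they underdetermine the constant's meaning.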
Following Quine, Davidson, and Dennett, I take mental states and linguistic meaning to be individuated with reference to interpretation. The regulative principle of ideal interpretation is to maximize rationality, and this accounts for the distinctiveness and autonomy of the vocabulary of agency. This rationality-maxim can accommodate empirical cognitive-psychological investigation into the nature and limitations of human mental processing. Interpretivism is explicitly anti-reductionist, but in the context of Rorty's neo-pragmatism provides a naturalized view of agents. The interpretivist strategy affords a less despondent view of constructive philosophical activity than Rorty's own.
Contributing Authors: Lilli Alanen & Frans Svensson, David Alm, Gustaf Arrhenius, Gunnar Björnsson, Luc Bovens, Richard Bradley, Geoffrey Brennan & Nicholas Southwood, John Broome, Linus Broström & Mats Johansson, Johan Brännmark, Krister Bykvist, John Cantwell, Erik Carlson, David Copp, Roger Crisp, Sven Danielsson, Dan Egonsson, Fred Feldman, Roger Fjellström, Marc Fleurbaey, Margaret Gilbert, Olav Gjelsvik, Kathrin Glüer & Peter Pagin, Ebba Gullberg & Sten Lindström, Peter Gärdenfors, Sven Ove Hansson, Jana Holsanova, Nils Holtug, Victoria Höög, Magnus Jiborn, Karsten Klint Jensen, Sigurður Kristinsson, Isaac Levi, Kasper Lippert-Rasmussen, David Makinson, Anna-Sofia Maurin, Philippe Mongin, Kevin Mulligan, Lennart Nordenfelt, Jonas Olson, Erik J. Olsson, Ingmar Persson, Johannes Persson, Björn Petersson, Philip Pettit, Hans Rott, Toni Rønnow-Rasmussen, Krister Segerberg, John Skorupski, Howard Sobel, Fredrik Stjernberg, Fred Stoutland, Caj Strandberg, Pär Sundström, Folke Tersman, Torbjörn Tännsjö, Peter Vallentyne, Bruno Verbeek, Stella Villarmea, and Michael J. Zimmerman.
My response addresses general commentary themes such as my neglect of the forebrain contribution to human consciousness, the bearing of blindsight on consciousness theory, the definition of wakefulness, the significance of emotion and pain perception for consciousness theory, and concerns regarding remnant cortex in children with hydranencephaly. Further specific topics, such as phenomenal and phylogenetic aspects of mesodiencephalic-thalamocortical relations, are also discussed. (Published Online May 1 2007).
‘Natural selection’ is, it seems, an ambiguous term. It is sometimes held to denote a consequence of variation, heredity, and environment, and at other times to denote a force that creates adaptations. I argue that the latter, the force interpretation, is a redundant notion of natural selection. I will point to difficulties in making sense of this linguistic practice, and argue that it is frequently at odds with standard interpretations of evolutionary theory. I provide examples to show this; one example involving the relation between adaptations and other traits, and a second involving the relation between selection and drift.
This article introduces compliance disclosure regimes to business ethics research. Compliance disclosure is a relatively recent regulatory technique whereby companies are obliged to disclose the extent to which they comply with codes, ‘best practice standards’ or other extra-legal texts containing norms or prospective norms. Such ‘compliance disclosure’ obligations are often presented as flexible regulatory alternatives to substantive, command-and-control regulation. However, based on a report on experiences of existing compliance disclosure obligations, this article will identify major weaknesses that prevent them from becoming effective mechanisms to discipline a certain type of behaviour. It will be argued that regulatory recourse to compliance disclosure obligations is nonetheless worthwhile if we view them as mechanisms that can initiate a dialogue about norm interpretation, application and norm desirability. From this perspective, compliance disclosure obligations serve less to discipline companies by making corporate practices transparent, and more to trigger a process of norm development, in which the law, companies and their stakeholders interact. This article provides an illustration of how mandatory disclosure, if it is restricted to a unilateral communication process, may produce no effective results (or even prove counterproductive), whilst highlighting the alternative potential of disclosure as an initiator of dialogue, supported by laws, geared towards the development and refinement of norms applicable to business in a global context and the values they promote.
The focus of this article is the analysis of generative mechanisms, a basic concept and phenomenon within the metatheoretical perspective of critical realism. It is emphasized that research questions and methods, as well as the knowledge it is possible to attain, depend on the basic view – ontologically and epistemologically – regarding the phenomenon under scrutiny. A generative mechanism is described as a trans-empirical but really existing entity, explaining why observable events occur. Mechanisms can mostly be grasped only indirectly by analytical work (theory-building), based however on empirical observations. In order to achieve such an explanatory analysis, five methodological steps are suggested and discussed, among them abduction and retroduction. These steps are illustrated throughout by examples drawn from empirical research regarding social work practice. The article concludes with a discussion of the need for knowledge of generative mechanisms.
Can it be better or worse for a person to be than not to be, that is, can it be better or worse to exist than not to exist at all? This old 'existential question' has been raised anew in contemporary moral philosophy. There are roughly two reasons for this renewed interest. Firstly, traditional so-called “impersonal” ethical theories, such as utilitarianism, have counter-intuitive implications in regard to questions concerning procreation and our moral duties to future, not yet existing people. Secondly, it has seemed evident to many that an outcome can only be better than another if it is better for someone, and that only moral theories that are in this sense “person affecting” can be correct. The implications of this Person Affecting Restriction will differ radically, however, depending on which answer one gives to the existential question. Melinda Roberts (2003) and Matthew Adler (2009) have defended an affirmative answer to the existential question using an assumption that one can ascribe a zero level of wellbeing to a person in a world in which that person doesn't exist. Contrariwise, Derek Parfit (1984), John Broome (1999), and others have worried that if we take a person’s life to be better for her than non-existence, then we would have to conclude that it would have been worse for her if she did not exist, which is absurd: Nothing would have been worse or better for a person if she had not existed. The paper suggests that an affirmative answer to the existential question can avoid such absurdities: One can claim that, say, it is better for a person to exist than not to exist, without implying that it would have been worse for a person if she had not existed or that her level of wellbeing would then have been lower.
Boyer & Lienard's (B&L's) biological model of ritual achieves a rather straightforward account of features shared by ritual pathology and the idiosyncratic rituals of children; but complexities accrue in extending it to human ritual culture generally. My commentary suggests that the ritual cultural traditions of animals such as songbirds share with human ritual culture structural features, a handicap-based origin, and the enabling neural mechanism of vocal learning. (Published Online February 8 2007).
The processes of economic integration induced by globalization have brought about a certain type of legal practice that challenges the core values of legal ethics. Law firms seeking to represent the interests of internationally active corporate clients must embrace and systematically apply concepts of strategic management and planning and install corporate business structures to sustain competition for lucrative clients. These measures carry a high potential for conflict with the core values of legal ethics. However, we observe in parallel a global consolidation of these core values through an enhanced cooperation of national professional bodies, the use of international codes, and comparative legal ethics teaching and research. Furthermore, state regulation of the legal profession is concerned with preserving the core values of legal ethics to conserve the lawyer's role in upholding the rule of law. This article argues that legal ethics is adapting to the pressures exerted by "managerial" approaches to legal practice without this altering the core values that underlie it.
"Forgetting" plays an important role in the lives of individuals and communities. Although a few Holocaust scholars have begun to take forgetting more seriously in relation to the task of remembering—in popular parlance as well as in academic discourse on the Holocaust—forgetting is usually perceived as a negative force. In the decades following 1945, the terms remembering and forgetting have often been used antithetically, with the communities of victims insisting on the duty to remember and a society of perpetrators desiring to forget. Thus, the discourse on Holocaust memory has become entrenched on this issue. This essay counters the swift rejection of forgetting and its labeling as a reprehensible act. It calls attention to two issues: first, it offers a critical argument for different forms of forgetting; second, it concludes with suggestions of how deliberate performative practices of forgetting might benefit communities affected by a genocidal past. Is it possible to conceive of forgetting not as the ugly twin of remembering but as its necessary companion?
An impressive amount of evidence from psychology, cognitive neurology, evolutionary psychology and primatology seems to be converging on a ‘dual process’ model of moral or practical (in the philosophical sense) psychology according to which our practical judgments are generated by two distinct processes, one ‘emotive-intuitive’ and one ‘cognitive-utilitarian’. In this paper I approach the dual process model from several directions, trying to shed light on various aspects of our moral and practical lives.
The paper describes and defends an eclectic approach to narrative explanation in history and social sciences (as well as in natural history). The view of narrative explanation defended allows combinations of several recent ideas concerning the nature of narrative explanation. The guiding idea is that the explanatory power of narratives consists in their capacity to accommodate various forms of explanations and interpretations. Narrative explanations are seen as theories about happenings that may consist of diverse forms of explanations, interpretations and explanation sketches. There is no single form of narrative explanation; rather, narrative is seen as a form for synthesizing various explanations. Several problems concerning explanation and narrative are discussed in relation to the proposed approach: laws in explanations, literary or fictional aspects of narratives, relativism, constructivism and noncognitivism or antirealism. Hayden White’s theory of the explanatory role of “emplotment” is discussed and criticized. The upshot is that the eclectic approach defended does not face any problems unique to it: the problems it faces are general epistemological problems. The literary aspects of historical narrative are interpreted as normative and rhetorical, making the relevance of these aspects for narrative explanation depend on the question whether there are legitimate moral explanations.
Research on the communication of emotion in music has, with few exceptions such as L. B. Meyer's Emotion and Meaning in Music (1956) and the contour theory (Kivy 1989, 2002), focused on music structure as a representation of emotions. This implies a semiotic approach - the assumption that music is a kind of language that could be read and decoded. Such an approach is largely restricted to the conscious level of knowing, understanding and communication. We suggest an understanding of music and emotion based on action-perception theory - present moment perception, implicit knowledge and imitation. This theory does not demand consciousness or the use of signs. Neuroscientific findings (adaptive oscillators, mirror neurons) are in concordance with our suggestion. Recently these findings have generated articles on empathy – relevant to the understanding of music and emotion.
It is a plain fact that biology makes use of terms and expressions commonly spoken of as teleological. Biologists frequently speak of the function of biological items. They may also say that traits are 'supposed to' perform some of their effects, claim that traits are 'for' specific effects, or that organisms have particular traits 'in order to' engage in specific interactions. There is general agreement that there must be something useful about this linguistic practice but it is controversial whether it is entirely appropriate, and if so why it is. Many theorists have defended the use of seemingly teleological terms by appeal to an etiological notion of function (Wright, 1973; Millikan, 1984, 2002; Neander, 1991; ...)
Sometimes it seems intuitively plausible to hold loosely structured sets of individuals morally responsible for failing to act collectively. Virginia Held, Larry May, and Torbjörn Tännsjö have all drawn this conclusion from thought experiments concerning small groups, although they apply the conclusion to large-scale omissions as well. On the other hand it is commonly assumed that (collective) agency is a necessary condition for (collective) responsibility. If that is true, then how can we hold sets of people responsible for not having acted collectively? This paper argues that loosely structured inactive groups sometimes meet this requirement if we employ a weak (but nonetheless non-reductionist) notion of collective agency. This notion can be defended on independent grounds. The resulting position on distribution of responsibility is more restrictive than Held's, May's or Tännsjö's, and this consequence seems intuitively attractive.
Björn Lindblom's account of the emergence of phonemic structure is a central reference point in contemporary discussions of the emergence of language. I argue that there are two distinct, and largely orthogonal, conceptions of emergence implicit in Lindblom's account. According to one conception (causal emergence), the process by which minimal pairs are generated is crucial to the claim that phonemic structure is emergent; according to the other conception (analytic emergence), the fact that segments are an abstraction from the physical signal is what is crucial to the description of phonemic structure as emergent. The purpose of distinguishing rather than conflating these two conceptions of emergence is not in the first instance to criticize Lindblom's account or to force us to choose between the two conceptions for consistency, but rather to give us a more detailed purchase on the notoriously thorny concept of emergent explanation.
By focusing on contributions to the literature on function ascription, this article seeks to illustrate two problems with philosophical accounts that are presented as having descriptive aims. There is a motivational problem in that there is frequently no good reason why descriptive aims should be important, and there is a methodological problem in that the methods employed frequently fail to match the task description. This suggests that the task description as such may be the result of “default descriptivism,” a tendency to take considerations that make sense of a practice to be the very considerations that generate it. Although such hypotheses are frequently quite plausible, the fact of the matter may not be very important for the pursuits of philosophers.
It is often stated that the image of the world which our senses present to us contradicts the scientific worldview in important respects. I challenge this position through a number of arguments centered on the nature of perception and of perceived qualities.
Portable Grammar Format (PGF) is a core language for type-theoretical grammars. It is the target language to which grammars written in the high-level formalism Grammatical Framework (GF) are compiled. Low-level and simple, PGF is easy to reason about, so that its language-theoretic properties can be established. It is also easy to write interpreters that perform parsing and generation with PGF grammars, and compilers converting PGF to other formats. This paper gives a concise description of PGF, covering syntax, semantics, and parser generation. It also discusses the technique of embedded grammars, where language processing tasks defined by PGF grammars are integrated in larger systems.
In this paper, we try to shed light on the ontological puzzle pertaining to models and to contribute to a better understanding of what models are. Our suggestion is that models should be regarded as a specific kind of sign according to the sign theory put forward by Charles S. Peirce, and, more precisely, as icons, i.e. as signs which are characterized by a similarity relation between sign (model) and object (original). We argue for this (1) by analyzing from a semiotic point of view the representational relation which is characteristic of models. We then corroborate our hypothesis (2) by discussing the conceptual differences between icons, i.e. models, and indexical and symbolic signs and (3) by putting forward a general classification of all icons into three functional subclasses (images, diagrams, and metaphors). Subsequently, we (4) integratively refine our results by resorting to two influential and, as can be shown, complementary philosophy of science approaches to models. This yields the following result: models are determined by a semiotic structure in which a subject intentionally uses an object, i.e. the model, as a sign for another object, i.e. the original, in the context of a chosen theory or language in order to attain a specific end. The subject does so by instituting a representational relation in which the syntactic structure of the model, i.e. its attributes and relations, represents by way of a mapping the properties of the original, which are hence regarded as similar in a relevant manner.
A non-productivist Marxism departing from the analysis of capitalism’s “dialectic of scarcity” can make a valuable contribution to the field of environmental ethics. On the one hand, the analysis of capitalism’s dialectic of scarcity shows that the ethical yardstick by which capitalism should be measured is immanent in this social system’s dynamic tendencies. On the other hand, this analysis exposes capitalism’s inability to fulfill the potential for an ecologically sustainable society without unnecessary human suffering that capitalism’s technological dynamism generates. This argument can be illustrated by a critical analysis of Bjørn Lomborg’s The Skeptical Environmentalist. An exploration of capitalism’s dialectic of scarcity can bring to light those weaknesses and internal contradictions of antiecological discourses that are likely to escape the attention of non-Marxist ecologists. This analysis shows that to the extent capitalism’s dialectic of scarcity encourages the fragmentation of social justice and environmental movements, a critical analysis of this dialectic can contribute to the formation of the alliance of emancipatory movements that the attainment of a just and ecologically sustainable society presupposes.
In his article 'Hledání hyperintenzionality' ('In Search of Hyperintensionality'), Bjørn Jespersen attempts to recapitulate and justify the way in which TIL (Transparent Intensional Logic) construes the notion of meaning. However much I think that Jespersen (not only in this article) succeeds in presenting TIL in a manner that is intelligible and even attractive to outsiders...
Summary: In two articles Friedrich Rapp argues that there is a methodological symmetry between falsification and verification in contradistinction to the logical asymmetry that obtains between them. (The Methodological Symmetry between Verification and Falsification, Ztschr. f. Allg. Wissth., Band VI/1 (1975), pp. 139–144; A Helpful Argument – Reply to K. Eichner, Ztschr. f. Allg. Wissth., Band VII/1 (1976), pp. 121–123). Rapp puts forward the thesis that methodological falsification of a theory T implies the acceptance of an inference from ~(x)Tx to (x)~Tx. However, this thesis does not have to be accepted even if the premises of Rapp's argument were accepted. Furthermore, Rapp has not shown that the falsification of a theory T implies that T will not be retained. Neither has Rapp formulated assumptions that are sufficient to guarantee that the outcome of an intended test of a theory T can be considered as an outcome of an actual test of T.
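The logical gap that Rapp's thesis has to bridge can be displayed in standard quantifier notation (a sketch of the scope distinction, not Rapp's own symbolism):

```latex
\neg \forall x\, Tx \;\equiv\; \exists x\, \neg Tx
\qquad\text{but}\qquad
\exists x\, \neg Tx \;\not\vdash\; \forall x\, \neg Tx
```

Methodological falsification, as Rapp construes it, would license the move from the failure of the universal claim on the left to the universal failure on the right; the point of the criticism is that nothing in the premises forces acceptance of that inference.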
Using an articulatory model we show that locus equations make special use of the phonetic space of possible locus patterns. There is nothing articulatorily inevitable about their linearity or slope-intercept characteristics. Nonetheless, articulatory factors do play an important role in the origin of simulated locus equations, but they cannot, by themselves, provide complete explanations for the observed facts. As in other domains, there is interaction between perceptual and motor factors.
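For reference, a locus equation in its usual form is a linear regression, fitted per consonant across vowel contexts, of the second-formant frequency at vowel onset on its value at the vowel midpoint (a standard formulation in the phonetics literature, not taken from this abstract):

```latex
F2_{\mathrm{onset}} = k \cdot F2_{\mathrm{vowel}} + c
```

The slope $k$ and intercept $c$ are the "slope-intercept characteristics" the abstract refers to; slopes near 1 are conventionally read as indicating a high degree of coarticulation between consonant and vowel.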
The goal of the present set of studies is to explore the boundary conditions of category transfer in causal learning. Previous research has shown that people are capable of inducing categories based on causal learning input, and they often transfer these categories to new causal learning tasks. However, occasionally learners abandon the learned categories and induce new ones. Whereas previously it has been argued that transfer is only observed with essentialist categories in which the hidden properties are causally relevant for the target effect in the transfer relation, we here propose an alternative explanation, the unbroken mechanism hypothesis. This hypothesis claims that categories are transferred from a previously learned causal relation to a new causal relation when learners assume a causal mechanism linking the two relations that is continuous and unbroken. The findings of two causal learning experiments support the unbroken mechanism hypothesis.
In discussions of moral responsibility for collectively produced effects, it is not uncommon to assume that we have to abandon the view that causal involvement is a necessary condition for individual co-responsibility. In general, considerations of cases where there is “a mismatch between the wrong a group commits and the apparent causal contributions for which we can hold individuals responsible” motivate this move. According to Brian Lawson, “solving this problem requires an approach that deemphasizes the importance of causal contributions”. Christopher Kutz’s theory of complicitious accountability in Complicity from 2000 is probably the most well-known approach of that kind. Standard examples are supposed to illustrate mismatches of three different kinds: an agent may be morally co-responsible for an event to a high degree even if her causal contribution to that event is a) very small, b) imperceptible, or c) non-existent (in overdetermination cases). From such examples, Kutz and others conclude that principles of complicitious accountability cannot include a condition of causal involvement. In the present paper, I defend the causal involvement condition for co-responsibility. These are my lines of argument: First, overdetermination cases can be accommodated within a theory of co-responsibility without giving up the causality condition. Kutz and others oversimplify the relation between counterfactual dependence and causation, and they overlook the possibility that causal relations other than marginal contribution could be morally relevant. Second, harmful effects are sometimes overdetermined by non-collective sets of acts. Over-farming, or the greenhouse effect, might be cases of that kind. In such cases, there need not be any formal organization, any unifying intentions, or any other noncausal criterion of membership available.
If we give up the causal condition for co-responsibility it will be impossible to delimit the morally relevant set of acts related to those harms. Since we sometimes find it fair to blame people for such harms, we must question the argument from overdetermination. Third, although problems about imperceptible effects or aggregation of very small effects are morally important, e.g. when we consider degrees of blameworthiness or epistemic limitations in reasoning about how to assign responsibility for specific harms, they are irrelevant to the issue of whether causal involvement is necessary for complicity. Fourth, the costs of rejecting the causality condition for complicity are high. Causation is an explicit and essential element in most doctrines of legal liability, and it is central in common sense views of moral responsibility. Giving up this condition could have radical and unwanted consequences for legal security and predictability. However, it is not only for pragmatic reasons and because it is a default position that we should require stronger arguments (than conflicting intuitions about “mismatches”) before giving up the causality condition. An essential element in holding someone to account for an event is the assumption that her actions and intentions are part of the explanation of why that event occurred. If we give up that element, it is difficult to see which important function responsibility assignments could have.
It is not unreasonable to think that the dispute between classical and intuitionistic mathematics might be unresolvable or 'faultless', in the sense of there being no objective way to settle it. If so, we would have a pretty case of relativism. In this note I argue, however, that there is in fact not even disagreement in any interesting sense, let alone a faultless one, in spite of appearances and claims to the contrary. A position I call classical pluralism is sketched, intended to provide a coherent methodological stance towards the issue. Some reasons to recommend this stance are given, as well as some speculations as to why not everyone might want to follow the recommendation.
Economic page turners like Freakonomics are well written, and there is much to be learned from them – not only about economics, but also about writing techniques. Their authors know how to build up suspense, i.e., they make readers want to know what comes next. Countless pages in books and magazines are filled with advice on writing reportages or suspense novels. While many of the tips are specific to the respective genres, some carry over to economic page turners in an instructive way. After introducing some of these writing tools, I discuss whether these and other aspects of good writing lead to a biased presentation of economic theory and practice. I conclude that whatever the problems with certain economic page turners may be, they are not due to the need to write in an accessible, appealing way.
The result of major research on development, security and culture, this collection, together with its second volume, Sustainable Development in a Globalized World, outlines the emerging field of global studies and the theoretical approach of global social theory. It considers social relations and the need for intercultural dialogue to respect "the other".
The frame/content theory justifiably makes tinkering an important explanatory principle. However, tinkering is linked to the accidental and, if completely decoupled from functional constraints, it could potentially play the role of an “idiosyncrasy generator”, thus offering a sort of “evolutionary” alibi for the Chomskyan paradigm – the approach to language that MacNeilage most emphatically rejects. To block that line of reasoning, it should be made clear that evolutionary opportunism always operates within the constraints of selection.