Following Quine, Davidson, and Dennett, I take mental states and linguistic meaning to be individuated with reference to interpretation. The regulative principle of ideal interpretation is to maximize rationality, and this accounts for the distinctiveness and autonomy of the vocabulary of agency. This rationality-maxim can accommodate empirical cognitive-psychological investigation into the nature and limitations of human mental processing. Interpretivism is explicitly anti-reductionist, but in the context of Rorty's neo-pragmatism it provides a naturalized view of agents. The interpretivist strategy affords a less despondent view of constructive philosophical activity than Rorty's own.
Insights into the history of chemistry. Journal article by Peter J. Ramberg (Truman State University, 100 E. Normal, Kirksville, MO 63501, USA), Metascience, DOI 10.1007/s11016-010-9482-4. Online ISSN 1467-9981; Print ISSN 0815-0796.
Richard Rorty (1931–2007) developed a distinctive and controversial brand of pragmatism that expressed itself along two main axes. One is negative—a critical diagnosis of what Rorty takes to be defining projects of modern philosophy. The other is positive—an attempt to show what intellectual culture might look like, once we free ourselves from the governing metaphors of mind and knowledge in which the traditional problems of epistemology and metaphysics (and indeed, in Rorty's view, the self-conception of modern philosophy) are rooted. The centerpiece of Rorty's critique is the provocative account offered in Philosophy and the Mirror of Nature (1979, hereafter PMN). In this book, and in the closely related essays collected in Consequences of Pragmatism (1982, hereafter CP), Rorty's principal target is the philosophical idea of knowledge as representation, as a mental mirroring of a mind-external world. Providing a contrasting image of philosophy, Rorty has sought to integrate and apply the milestone achievements of Dewey, Hegel and Darwin in a pragmatist synthesis of historicism and naturalism. Characterizations and illustrations of a post-epistemological intellectual culture, present in both PMN (part III) and CP (xxxvii-xliv), are more richly developed in later works, such as Contingency, Irony, and Solidarity (1989, hereafter CIS), in the popular essays and articles collected in Philosophy and Social Hope (1999), and in the four volumes of philosophical papers, Objectivity, Relativism, and Truth (1991, hereafter ORT); Essays on Heidegger and Others (1991, hereafter EHO); Truth and Progress (1998, hereafter TP); and Philosophy as Cultural Politics (2007, hereafter PCP). In these writings, ranging over an unusually wide intellectual territory, Rorty offers a highly integrated, multifaceted view of thought, culture, and politics, a view that has made him one of the most widely discussed philosophers of our time.
The book is an “introductory” reconstruction of Davidson on interpretation, a claim to be taken with a grain of salt. Writing introductory books has become an idol of the tribe. This is a concise book that reflects much study. It has many virtues along with some flaws. Ramberg assembles themes and puzzles from Davidson into a more or less coherent viewpoint. A special virtue is the innovative treatment of incommensurability and of the relation of Davidson’s work to hermeneutic themes. The weakness comes in a certain unevenness. While generally convincing and well written, the book has low points which may leave the reader confused or unconvinced. Davidson is the hero in this book, and our hero is sometimes over-idealized.
‘The Second Mistake’ (TSM) is to think that if an act is right or wrong because of its effects, the only relevant effects are the effects of this particular act. This is not (as some think) a truism, since ‘the effects of this particular act’ and ‘its effects’ need not co-refer. Derek Parfit's rejection of TSM is based mainly on intuitions concerning sets of acts that over-determine certain harms. In these cases, each act belongs to the relevant set in virtue of a causal relation (other than marginal contribution) to a specific harmful event. This feature may make an act wrong, in a fashion consequentialists could admit. That explication of TSM does not rely on the questionable assumption that the set of acts is what harms here. Independently of this, there are several other reasons to prefer it to the ‘mere participation’ approach.
A broad range of evidence regarding the functional organization of the vertebrate brain – spanning from comparative neurology to experimental psychology and neurophysiology to clinical data – is reviewed for its bearing on conceptions of the neural organization of consciousness. A novel principle relating target selection, action selection, and motivation to one another, as a means to optimize integration for action in real time, is introduced. With its help, the principal macrosystems of the vertebrate brain can be seen to form a centralized functional design in which an upper brainstem system organized for conscious function performs a penultimate step in action control. This upper brainstem system retained a key role throughout the evolutionary process by which an expanding forebrain – culminating in the cerebral cortex of mammals – came to serve as a medium for the elaboration of conscious contents. This highly conserved upper brainstem system, which extends from the roof of the midbrain to the basal diencephalon, integrates the massively parallel and distributed information capacity of the cerebral hemispheres into the limited-capacity, sequential mode of operation required for coherent behavior. It maintains special connective relations with cortical territories implicated in attentional and conscious functions, but is not rendered nonfunctional in the absence of cortical input. This helps explain the purposive, goal-directed behavior exhibited by mammals after experimental decortication, as well as the evidence that children born without a cortex are conscious. Taken together, these circumstances suggest that brainstem mechanisms are integral to the constitution of the conscious state, and that an adequate account of neural mechanisms of conscious function cannot be confined to the thalamocortical complex alone.
(Published Online May 1 2007) Key Words: action selection; anencephaly; central decision making; consciousness; control architectures; hydranencephaly; macrosystems; motivation; target selection; zona incerta.
Contributing Authors: Lilli Alanen & Frans Svensson, David Alm, Gustaf Arrhenius, Gunnar Björnsson, Luc Bovens, Richard Bradley, Geoffrey Brennan & Nicholas Southwood, John Broome, Linus Broström & Mats Johansson, Johan Brännmark, Krister Bykvist, John Cantwell, Erik Carlson, David Copp, Roger Crisp, Sven Danielsson, Dan Egonsson, Fred Feldman, Roger Fjellström, Marc Fleurbaey, Margaret Gilbert, Olav Gjelsvik, Kathrin Glüer & Peter Pagin, Ebba Gullberg & Sten Lindström, Peter Gärdenfors, Sven Ove Hansson, Jana Holsanova, Nils Holtug, Victoria Höög, Magnus Jiborn, Karsten Klint Jensen, Sigurður Kristinsson, Isaac Levi, Kasper Lippert-Rasmussen, David Makinson, Anna-Sofia Maurin, Philippe Mongin, Kevin Mulligan, Lennart Nordenfelt, Jonas Olson, Erik J. Olsson, Ingmar Persson, Johannes Persson, Björn Petersson, Philip Pettit, Hans Rott, Toni Rønnow-Rasmussen, Krister Segerberg, John Skorupski, Howard Sobel, Fredrik Stjernberg, Fred Stoutland, Caj Strandberg, Pär Sundström, Folke Tersman, Torbjörn Tännsjö, Peter Vallentyne, Bruno Verbeek, Stella Villarmea, and Michael J. Zimmerman.
My response addresses general commentary themes such as my neglect of the forebrain contribution to human consciousness, the bearing of blindsight on consciousness theory, the definition of wakefulness, the significance of emotion and pain perception for consciousness theory, and concerns regarding remnant cortex in children with hydranencephaly. Further specific topics, such as phenomenal and phylogenetic aspects of mesodiencephalic-thalamocortical relations, are also discussed. (Published Online May 1 2007).
The relationship of the author's intention to the meaning of a literary work has been a persistently controversial topic in aesthetics. Anti-intentionalists Wimsatt and Beardsley, in the 1946 paper that launched the debate, accused critics who fueled their interpretative activity by poring over the author's private diaries and life story of committing the 'fallacy' of equating the work's meaning, properly determined by context and linguistic convention, with the meaning intended by the author. Hirsch responded that context and convention are not sufficient to determine a unique meaning for a text; to avoid radical ambiguity we must appeal to the author's intention, which actualizes one of the candidate meanings. Subsequent writers have defended refined versions of these views, and a variety of positions on the spectrum between them, in a debate that remains central to philosophical aesthetics. While much of the debate has focused on literature, similar questions arise with respect to the interpretation of visual artworks. Some of the readings listed below address this matter explicitly.

Author Recommends:

William K. Wimsatt and Monroe C. Beardsley, 'The Intentional Fallacy', Sewanee Review 54 (1946): 468–88. Locus classicus of the anti-intentionalist position: Wimsatt and Beardsley hold that appeal to the author's intention is always extraneous, since intention cannot override the role of linguistic convention and context in determining meaning. Criticism, they argue, should thus proceed by careful examination of the literary work rather than by sifting through biographical material that might hint at the author's intentions.

E. D. Hirsch, Jr., Validity in Interpretation (New Haven, CT: Yale University Press, 1967).
The seminal statement of actual intentionalism: Hirsch holds that 'meaning is an affair of consciousness and not of physical signs or things' (23), though he allows that linguistic convention constrains the meanings the author can intend for a particular utterance. He argues that the author's intention is necessary to fix meaning, since the application of conventions alone would typically leave a text wildly indeterminate.

Alexander Nehamas, 'The Postulated Author: Critical Monism as a Regulative Ideal', Critical Inquiry 8 (1981): 133–49. Nehamas argues for a version of hypothetical intentionalism according to which interpretation is a matter of attributing an intended meaning to a hypothetical author, distinct from the historical writer. This view allows the interpreter to find meaning even in features of the work that may have been mere accidents on the part of the historical writer.

Gary Iseminger, ed., Intention and Interpretation (Philadelphia, PA: Temple University Press, 1992). An outstanding collection including both classic and new essays representing most of the major viewpoints in the debate.

Noël Carroll, 'Art, Intention, and Conversation', Intention and Interpretation, ed. Gary Iseminger (Philadelphia, PA: Temple University Press, 1992), 97–131. The essay defends modest actual intentionalism, according to which the work's meaning is one compatible both with the author's meaning intentions and with the conventionally allowable meanings of the text. Carroll holds that literature is on a continuum with ordinary conversation, to which an intentionalist analysis is apt; for this reason he rejects anti-intentionalism and hypothetical intentionalism, which emphasize the purported autonomy of literary works from their authors.

Daniel Nathan, 'Irony, Metaphor, and the Problem of Intention', Intention and Interpretation, ed. Gary Iseminger (Philadelphia, PA: Temple University Press, 1992), 183–202.
Nathan argues that even irony and metaphor, which are often thought to require an analysis in terms of the author's actual intentions, are in fact best understood on an anti-intentionalist approach.

Jerrold Levinson, 'Intention and Interpretation in Literature', The Pleasures of Aesthetics: Philosophical Essays (Ithaca, NY: Cornell University Press, 1996), 175–213. Revised version of 'Intention and Interpretation: A Last Look', Intention and Interpretation, ed. Gary Iseminger (Philadelphia, PA: Temple University Press, 1992), 221–56. The essay defends a version of hypothetical intentionalism according to which the meaning of a literary work is the meaning that would be attributed to the actual author by members of the ideal audience. Levinson argues that literary works should be treated differently from everyday utterances, since it is a convention of literature that its works are substantially autonomous from their authors.

Paisley Livingston, Art and Intention: A Philosophical Study (Oxford: Clarendon Press, 2005). Livingston examines competing accounts of the nature of intentions as they pertain to a variety of issues in the philosophy of art, including the ontology of art, the nature of authorship, and art interpretation. In chapter 6, Livingston argues for partial intentionalism, according to which some, but not all, of a work's meanings are non-redundantly determined by the author's intentions.

Stephen Davies, 'Authors' Intentions, Literary Interpretation, and Literary Value', British Journal of Aesthetics 46 (2006): 223–47. Davies defends the value-maximizing view, according to which, when there is more than one conventional meaning consistent with the work's features, the meaning that should be attributed to the work is the one that makes the work out to be most aesthetically valuable. He allows for the attribution of multiple meanings when more than one candidate (approximately) maximizes the work's value.
Online Materials:

http://plato.stanford.edu/entries/beardsley-aesthetics/ Beardsley's Aesthetics (Michael Wreen)
http://plato.stanford.edu/entries/conceptual-art/ Conceptual Art (Elisabeth Schellekens)
http://plato.stanford.edu/entries/speech-acts/ Speech Acts (Mitchell Green)
http://plato.stanford.edu/entries/hermeneutics/ Hermeneutics (Bjørn Ramberg and Kristin Gjesdal)

Sample Syllabus:

Week 1: Foundations
1. Wimsatt and Beardsley, 'The Intentional Fallacy'.
2. Livingston, 'What Are Intentions?', Art and Intention, 1–30.

Weeks 2–3: Actual Intentionalism
1. Hirsch, Validity in Interpretation, ch. 1–2, 1–67.
2. Gary Iseminger, 'An Intentional Demonstration?', Intention and Interpretation, ed. Iseminger, 76–96.
Optional reading:
1. Stephen Knapp and Walter Benn Michaels, 'Against Theory', Critical Inquiry 8 (1982): 723–42.
2. Stephen Knapp and Walter Benn Michaels, 'Against Theory 2: Hermeneutics and Deconstruction', Critical Inquiry 14 (1987): 49–58.

Weeks 4–5: Modest, Moderate and Partial Intentionalism
1. Carroll, 'Art, Intention, and Conversation'.
2. Robert Stecker, Interpretation and Construction: Art, Speech, and the Law (Malden, MA: Blackwell, 2003), ch. 2, 29–51.
3. Livingston, 'Intention and the Interpretation of Art', Art and Intention, 135–74.
Optional reading:
1. Carroll, 'Interpretation and Intention: The Debate between Hypothetical and Actual Intentionalism', Metaphilosophy 31 (2000): 75–95.
2. Stecker, 'Moderate Actual Intentionalism Defended', Journal of Aesthetics and Art Criticism 64 (2006): 429–38.

Weeks 6–7: Hypothetical Intentionalism
1. William E. Tolhurst, 'On What a Text Is and How It Means', British Journal of Aesthetics 19 (1979): 3–14.
2. Nehamas, 'Postulated Author'.
3. Levinson, 'Intention and Interpretation in Literature'.
Optional reading:
1. Nehamas, 'What an Author Is', Journal of Philosophy 83 (1986): 685–91.
2. Nehamas, 'Writer, Text, Work, Author', Literature and the Question of Philosophy, ed. A. J. Cascardi (Baltimore, MD: Johns Hopkins University Press, 1987), 265–91.
3. Levinson, 'Hypothetical Intentionalism: Statement, Objections, and Replies', Is There a Single Right Interpretation?, ed. M. Krausz (University Park, PA: Pennsylvania State University Press, 2002), 309–18.

Week 8: The Value-Maximizing View
1. Davies, 'The Aesthetic Relevance of Authors' and Painters' Intentions', Journal of Aesthetics and Art Criticism 41 (1982): 65–76.
2. Davies, 'Authors' Intentions, Literary Interpretation, and Literary Value'.

Weeks 9–10: Anti-Intentionalism
1. Beardsley, 'The Authority of the Text', The Possibility of Criticism (Detroit: Wayne State University Press, 1970), 16–37.
2. Nathan, 'Irony, Metaphor, and the Problem of Intention'.
3. Nathan, 'Art, Meaning, and Artist's Meaning', Contemporary Debates in Aesthetics and the Philosophy of Art, ed. M. Kieran (Malden, MA: Blackwell, 2006), 282–95.
Optional reading:
1. Beardsley, 'Intentions and Interpretations: A Fallacy Revived', The Aesthetic Point of View: Selected Essays, ed. M. J. Wreen and D. M. Callen (Ithaca, NY: Cornell University Press, 1982), 188–207.
2. Nathan, 'Irony and the Author's Intentions', British Journal of Aesthetics 22 (1982): 246–56.

Sample Mini-Syllabus:

Week 1: Foundations
1. Wimsatt and Beardsley, 'The Intentional Fallacy'.
2. Livingston, 'What Are Intentions?', Art and Intention, 1–30.

Week 2: Actual and Modest Intentionalism
1. Hirsch, Validity in Interpretation, ch. 1–2, 1–67.
2. Carroll, 'Art, Intention, and Conversation'.

Week 3: Hypothetical Intentionalism and Anti-Intentionalism
1. Levinson, 'Intention and Interpretation in Literature'.
2. Nathan, 'Irony, Metaphor, and the Problem of Intention'.

Focus Questions:

1. Is the difficulty of ascertaining the author's intentions a good reason to reject actual intentionalism?
2. Should literary works be seen as largely autonomous from their authors, even if we think that interpretation of ordinary utterances is properly a matter of ascertaining the speaker's intentions?
3. Are linguistic context and convention sufficient to determine the meaning of a literary work, or is the author's intention required to stave off an unacceptable degree of ambiguity?
4. Should the author's intentions about the genre or category to which the work belongs have a different status than intentions about the work's meaning?
5. Can the author's intentions have a non-redundant role to play in fixing meaning even if we take the role of context and linguistic convention seriously?
6. Should we expect the author's intention to play the same role (if any) in the interpretation of visual artworks that it plays in the interpretation of literature, or do differences between these two art forms require distinct approaches?
‘Natural selection’ is, it seems, an ambiguous term. It is sometimes held to denote a consequence of variation, heredity, and environment, while at other times it is taken to denote a force that creates adaptations. I argue that the latter, the force interpretation, is a redundant notion of natural selection. I will point to difficulties in making sense of this linguistic practice, and argue that it is frequently at odds with standard interpretations of evolutionary theory. I provide examples to show this: one involving the relation between adaptations and other traits, and a second involving the relation between selection and drift.
This article introduces compliance disclosure regimes to business ethics research. Compliance disclosure is a relatively recent regulatory technique whereby companies are obliged to disclose the extent to which they comply with codes, ‘best practice standards’ or other extra-legal texts containing norms or prospective norms. Such ‘compliance disclosure’ obligations are often presented as flexible regulatory alternatives to substantive, command-and-control regulation. However, based on a report on experiences of existing compliance disclosure obligations, this article will identify major weaknesses that prevent them from becoming effective mechanisms to discipline a certain type of behaviour. It will be argued that regulatory recourse to compliance disclosure obligations is nonetheless worthwhile if we view them as mechanisms that can initiate a dialogue about norm interpretation, application and norm desirability. From this perspective, compliance disclosure obligations serve less to discipline companies by making corporate practices transparent, and more to trigger a process of norm development, in which the law, companies and their stakeholders interact. This article provides an illustration of how mandatory disclosure, if it is restricted to a unilateral communication process, may produce no effective results (or even prove counterproductive), whilst highlighting the alternative potential of disclosure as an initiator of dialogue, supported by laws, geared towards the development and refinement of norms applicable to business in a global context and the values they promote.
The focus of this article is the analysis of generative mechanisms, a basic concept and phenomenon within the metatheoretical perspective of critical realism. It is emphasized that research questions and methods, as well as the knowledge it is possible to attain, depend on the basic view – ontologically and epistemologically – regarding the phenomenon under scrutiny. A generative mechanism is described as a trans-empirical but really existing entity that explains why observable events occur. Mechanisms can mostly be grasped only indirectly, by analytical work (theory-building) based, however, on empirical observations. In order to achieve such an explanatory analysis, five methodological steps are suggested and discussed, among them abduction and retroduction. These steps are illustrated throughout by examples drawn from empirical research on social work practice. The article concludes with a discussion of the need for knowledge of generative mechanisms.
Can it be better or worse for a person to be than not to be, that is, can it be better or worse to exist than not to exist at all? This old 'existential question' has been raised anew in contemporary moral philosophy. There are roughly two reasons for this renewed interest. Firstly, traditional so-called “impersonal” ethical theories, such as utilitarianism, have counter-intuitive implications in regard to questions concerning procreation and our moral duties to future, not yet existing people. Secondly, it has seemed evident to many that an outcome can only be better than another if it is better for someone, and that only moral theories that are in this sense “person affecting” can be correct. The implications of this Person Affecting Restriction will differ radically, however, depending on which answer one gives to the existential question. Melinda Roberts (2003) and Matthew Adler (2009) have defended an affirmative answer to the existential question using the assumption that one can ascribe a zero level of wellbeing to a person in a world in which that person doesn't exist. Contrariwise, Derek Parfit (1984), John Broome (1999), and others have worried that if we take a person’s life to be better for her than non-existence, then we would have to conclude that it would have been worse for her if she did not exist, which is absurd: nothing would have been worse or better for a person if she had not existed. The paper suggests that an affirmative answer to the existential question can avoid such absurdities: one can claim that, say, it is better for a person to exist than not to exist, without implying that it would have been worse for a person if she had not existed or that her level of wellbeing would then have been lower.
Boyer & Lienard's (B&L's) biological model of ritual achieves a rather straightforward account of features shared by ritual pathology and the idiosyncratic rituals of children, but complexities accrue in extending it to human ritual culture generally. My commentary suggests that the ritual cultural traditions of animals such as songbirds share structural features and a handicap-based origin, as well as the enabling neural mechanism of vocal learning, with human ritual culture. (Published Online February 8 2007).
The processes of economic integration induced by globalization have brought about a certain type of legal practice that challenges the core values of legal ethics. Law firms seeking to represent the interests of internationally active corporate clients must embrace and systematically apply concepts of strategic management and planning and install corporate business structures to sustain competition for lucrative clients. These measures carry a high potential for conflict with the core values of legal ethics. However, we observe in parallel a global consolidation of these core values through enhanced cooperation of national professional bodies, the use of international codes, and comparative legal ethics teaching and research. Furthermore, state regulation of the legal profession is concerned with preserving the core values of legal ethics so as to conserve the lawyer's role in upholding the rule of law. This article argues that legal ethics is adapting to the pressures exerted by "managerial" approaches to legal practice without this altering the core values that underlie it.
"Forgetting" plays an important role in the lives of individuals and communities. Although a few Holocaust scholars have begun to take forgetting more seriously in relation to the task of remembering—in popular parlance as well as in academic discourse on the Holocaust—forgetting is usually perceived as a negative force. In the decades following 1945, the terms remembering and forgetting have often been used antithetically, with the communities of victims insisting on the duty to remember and a society of perpetrators desiring to forget. Thus, the discourse on Holocaust memory has become entrenched on this issue. This essay counters the swift rejection of forgetting and its labeling as a reprehensible act. It calls attention to two issues: first, it offers a critical argument for different forms of forgetting; second, it concludes with suggestions of how deliberate performative practices of forgetting might benefit communities affected by a genocidal past. Is it possible to conceive of forgetting not as the ugly twin of remembering but as its necessary companion?
An impressive amount of evidence from psychology, cognitive neurology, evolutionary psychology and primatology seems to be converging on a ‘dual process’ model of moral or practical (in the philosophical sense) psychology according to which our practical judgments are generated by two distinct processes, one ‘emotive-intuitive’ and one ‘cognitive-utilitarian’. In this paper I approach the dual process model from several directions, trying to shed light on various aspects of our moral and practical lives.
The paper describes and defends an eclectic approach to narrative explanation in history and the social sciences (as well as in natural history). The view of narrative explanation defended allows combinations of several recent ideas concerning the nature of narrative explanation. The guiding idea is that the explanatory power of narratives consists in their capacity to accommodate various forms of explanations and interpretations. Narrative explanations are seen as theories about happenings that may consist of diverse forms of explanations, interpretations and explanation sketches. There is no single form of narrative explanation; rather, narrative is seen as a form for synthesizing various explanations. Several problems concerning explanation and narrative are discussed in relation to the proposed approach: laws in explanations, literary or fictional aspects of narratives, relativism, constructivism and noncognitivism or antirealism. Hayden White’s theory of the explanatory role of “emplotment” is discussed and criticized. The upshot is that the eclectic approach defended does not face any problems unique to it: the problems it faces are general epistemological problems. The literary aspects of historical narrative are interpreted as normative and rhetorical, making the relevance of these aspects for narrative explanation depend on the question whether there are legitimate moral explanations.
Research on the communication of emotion in music has, with few exceptions (such as L. B. Meyer's Emotion and Meaning in Music (1956) and the contour theory; Kivy 1989, 2002), focused on musical structure as a representation of emotions. This implies a semiotic approach: the assumption that music is a kind of language that can be read and decoded. Such an approach is largely restricted to the conscious level of knowing, understanding and communication. We suggest an understanding of music and emotion based on action-perception theory: present-moment perception, implicit knowledge and imitation. This theory does not demand consciousness or the use of signs. Neuroscientific findings (adaptive oscillators, mirror neurons) are in concordance with our suggestion. Recently these findings have generated articles on empathy, which are relevant to the understanding of music and emotion.
It is a plain fact that biology makes use of terms and expressions commonly spoken of as teleological. Biologists frequently speak of the function of biological items. They may also say that traits are 'supposed to' perform some of their effects, claim that traits are 'for' specific effects, or that organisms have particular traits 'in order to' engage in specific interactions. There is general agreement that there must be something useful about this linguistic practice, but it is controversial whether it is entirely appropriate, and if so why it is. Many theorists have defended the use of seemingly teleological terms by appeal to an etiological notion of function (Wright, 1973; Millikan, 1984, 2002; Neander, 1991; …).
Sometimes it seems intuitively plausible to hold loosely structured sets of individuals morally responsible for failing to act collectively. Virginia Held, Larry May, and Torbjörn Tännsjö have all drawn this conclusion from thought experiments concerning small groups, although they apply the conclusion to large-scale omissions as well. On the other hand, it is commonly assumed that (collective) agency is a necessary condition for (collective) responsibility. If that is true, then how can we hold sets of people responsible for not having acted collectively? This paper argues that loosely structured inactive groups sometimes meet this requirement if we employ a weak (but nonetheless non-reductionist) notion of collective agency. This notion can be defended on independent grounds. The resulting position on the distribution of responsibility is more restrictive than Held's, May's or Tännsjö's, and this consequence seems intuitively attractive.
Björn Lindblom's account of the emergence of phonemic structure is a central reference point in contemporary discussions of the emergence of language. I argue that there are two distinct, and largely orthogonal, conceptions of emergence implicit in Lindblom's account. According to one conception (causal emergence), the process by which minimal pairs are generated is crucial to the claim that phonemic structure is emergent; according to the other conception (analytic emergence), the fact that segments are an abstraction from the physical signal is what is crucial to the description of phonemic structure as emergent. The purpose of distinguishing rather than conflating these two conceptions of emergence is not in the first instance to criticize Lindblom's account or to force us to choose between the two conceptions for consistency, but rather to give us a more detailed purchase on the notoriously thorny concept of emergent explanation.
By focusing on contributions to the literature on function ascription, this article seeks to illustrate two problems with philosophical accounts that are presented as having descriptive aims. There is a motivational problem, in that there is frequently no good reason why descriptive aims should be important, and there is a methodological problem, in that the methods employed frequently fail to match the task description. This suggests that the task description as such may be the result of “default descriptivism,” a tendency to take considerations that make sense of a practice to be the very considerations that generate it. Although such hypotheses are frequently quite plausible, the fact of the matter may not be very important for the pursuits of philosophers.
It is often stated that the image of the world which our senses present to us contradicts the scientific worldview in important respects. I challenge this position through a number of arguments centered on the nature of perception and of perceived qualities.
Portable Grammar Format (PGF) is a core language for type-theoretical grammars. It is the target language to which grammars written in the high-level formalism Grammatical Framework (GF) are compiled. Low-level and simple, PGF is easy to reason about, so that its language-theoretic properties can be established. It is also easy to write interpreters that perform parsing and generation with PGF grammars, and compilers converting PGF to other formats. This paper gives a concise description of PGF, covering syntax, semantics, and parser generation. It also discusses the technique of embedded grammars, where language-processing tasks defined by PGF grammars are integrated into larger systems.
In this paper, we try to shed light on the ontological puzzle pertaining to models and to contribute to a better understanding of what models are. Our suggestion is that models should be regarded as a specific kind of sign according to the sign theory put forward by Charles S. Peirce, and, more precisely, as icons, i.e. as signs which are characterized by a similarity relation between sign (model) and object (original). We argue for this (1) by analyzing from a semiotic point of view the representational relation which is characteristic of models. We then corroborate our hypothesis (2) by discussing the conceptual differences between icons, i.e. models, and indexical and symbolic signs, and (3) by putting forward a general classification of all icons into three functional subclasses (images, diagrams, and metaphors). Subsequently, we (4) integratively refine our results by resorting to two influential and, as can be shown, complementary philosophy-of-science approaches to models. This yields the following result: models are determined by a semiotic structure in which a subject intentionally uses an object, the model, as a sign for another object, the original, in the context of a chosen theory or language in order to attain a specific end. The subject does so by instituting a representational relation in which the syntactic structure of the model, i.e. its attributes and relations, represents by way of a mapping the properties of the original, which are hence regarded as similar in a relevant manner.
A non-productivist Marxism departing from the analysis of capitalism’s “dialectic of scarcity” can make a valuable contribution to the field of environmental ethics. On the one hand, the analysis of capitalism’s dialectic of scarcity shows that the ethical yardstick by which capitalism should be measured is immanent in this social system’s dynamic tendencies. On the other hand, this analysis exposes capitalism’s inability to fulfill the potential, generated by its own technological dynamism, for an ecologically sustainable society without unnecessary human suffering. This argument can be illustrated by a critical analysis of Bjørn Lomborg’s The Skeptical Environmentalist. An exploration of capitalism’s dialectic of scarcity can bring to light those weaknesses and internal contradictions of anti-ecological discourses that are likely to escape the attention of non-Marxist ecologists. This analysis shows that to the extent capitalism’s dialectic of scarcity encourages the fragmentation of social justice and environmental movements, a critical analysis of this dialectic can contribute to the formation of the alliance of emancipatory movements that the attainment of a just and ecologically sustainable society presupposes.
In his article 'Hledání hyperintenzionality' ('The Search for Hyperintensionality'), Bjørn Jespersen attempts to recapitulate and justify the way in which TIL construes the concept of meaning. However much I think that Jespersen (not only in this article) succeeds in presenting TIL in a manner comprehensible and even attractive to outsiders...
Summary: In two articles Friedrich Rapp argues that there is a methodological symmetry between falsification and verification, in contradistinction to the logical asymmetry that obtains between them (The Methodological Symmetry between Verification and Falsification, Ztschr. f. Allg. Wissth., Band VI/1 (1975), pp. 139–144; A Helpful Argument – Reply to K. Eichner, Ztschr. f. Allg. Wissth., Band VII/1 (1976), pp. 121–123). Rapp puts forward the thesis that methodological falsification of a theory T implies the acceptance of an inference from ~(x)Tx to (x)~Tx. However, this thesis does not have to be accepted even if the premises of Rapp's argument were accepted. Furthermore, Rapp has not shown that the falsification of a theory T implies that T will not be retained. Neither has Rapp formulated assumptions that are sufficient to guarantee that the outcome of an intended test of a theory T can be considered as an outcome of an actual test of T.
Using an articulatory model we show that locus equations make special use of the phonetic space of possible locus patterns. There is nothing articulatorily inevitable about their linearity or slope-intercept characteristics. Nonetheless, articulatory factors do play an important role in the origin of simulated locus equations, but they cannot, by themselves, provide complete explanations for the observed facts. As in other domains, there is interaction between perceptual and motor factors.
The goal of the present set of studies is to explore the boundary conditions of category transfer in causal learning. Previous research has shown that people are capable of inducing categories based on causal learning input, and they often transfer these categories to new causal learning tasks. However, occasionally learners abandon the learned categories and induce new ones. Whereas previously it has been argued that transfer is only observed with essentialist categories in which the hidden properties are causally relevant for the target effect in the transfer relation, we here propose an alternative explanation, the unbroken mechanism hypothesis. This hypothesis claims that categories are transferred from a previously learned causal relation to a new causal relation when learners assume a causal mechanism linking the two relations that is continuous and unbroken. The findings of two causal learning experiments support the unbroken mechanism hypothesis.
In discussions of moral responsibility for collectively produced effects, it is not uncommon to assume that we have to abandon the view that causal involvement is a necessary condition for individual co-responsibility. In general, considerations of cases where there is “a mismatch between the wrong a group commits and the apparent causal contributions for which we can hold individuals responsible” motivate this move. According to Brian Lawson, “solving this problem requires an approach that deemphasizes the importance of causal contributions”. Christopher Kutz’s theory of complicitious accountability in Complicity from 2000 is probably the most well-known approach of that kind. Standard examples are supposed to illustrate mismatches of three different kinds: an agent may be morally co-responsible for an event to a high degree even if her causal contribution to that event is a) very small, b) imperceptible, or c) non-existent (in overdetermination cases). From such examples, Kutz and others conclude that principles of complicitious accountability cannot include a condition of causal involvement. In the present paper, I defend the causal involvement condition for co-responsibility. These are my lines of argument: First, overdetermination cases can be accommodated within a theory of co-responsibility without giving up the causality condition. Kutz and others oversimplify the relation between counterfactual dependence and causation, and they overlook the possibility that causal relations other than marginal contribution could be morally relevant. Second, harmful effects are sometimes overdetermined by non-collective sets of acts. Over-farming, or the greenhouse effect, might be cases of that kind. In such cases, there need not be any formal organization, any unifying intentions, or any other non-causal criterion of membership available.
If we give up the causal condition for co-responsibility, it will be impossible to delimit the morally relevant set of acts related to those harms. Since we sometimes find it fair to blame people for such harms, we must question the argument from overdetermination. Third, although problems about imperceptible effects or aggregation of very small effects are morally important, e.g. when we consider degrees of blameworthiness or epistemic limitations in reasoning about how to assign responsibility for specific harms, they are irrelevant to the issue of whether causal involvement is necessary for complicity. Fourth, the costs of rejecting the causality condition for complicity are high. Causation is an explicit and essential element in most doctrines of legal liability, and it is central in common-sense views of moral responsibility. Giving up this condition could have radical and unwanted consequences for legal security and predictability. However, it is not only for pragmatic reasons and because it is a default position that we should require stronger arguments (than conflicting intuitions about “mismatches”) before giving up the causality condition. An essential element in holding someone to account for an event is the assumption that her actions and intentions are part of the explanation of why that event occurred. If we give up that element, it is difficult to see which important function responsibility assignments could have.
It is not unreasonable to think that the dispute between classical and intuitionistic mathematics might be unresolvable or 'faultless', in the sense of there being no objective way to settle it. If so, we would have a pretty case of relativism. In this note I argue, however, that there is in fact not even disagreement in any interesting sense, let alone a faultless one, in spite of appearances and claims to the contrary. A position I call classical pluralism is sketched, intended to provide a coherent methodological stance towards the issue. Some reasons to recommend this stance are given, as well as some speculations as to why not everyone might want to follow the recommendation.
Economic page turners like Freakonomics are well written, and there is much to be learned from them – not only about economics, but also about writing techniques. Their authors know how to build up suspense, i.e., they make readers want to know what comes next. Countless pages in books and magazines are filled with advice on writing reportage or suspense novels. While many of the tips are specific to the respective genres, some carry over to economic page turners in an instructive way. After introducing some of these writing tools, I discuss whether these and other aspects of good writing lead to a biased presentation of economic theory and practice. I conclude that whatever the problems with certain economic page turners may be, they are not due to the need to write in an accessible, appealing way.
The result of major research on development, security, and culture, this collection, together with its second volume, Sustainable Development in a Globalized World, outlines the emerging field of global studies and the theoretical approach of global social theory. It considers social relations and the need for intercultural dialogue to respect "the other."
The frame/content theory justifiably makes tinkering an important explanatory principle. However, tinkering is linked to the accidental and, if completely decoupled from functional constraints, it could potentially play the role of an “idiosyncrasy generator,” thus offering a sort of “evolutionary” alibi for the Chomskyan paradigm – the approach to language that MacNeilage most emphatically rejects. To block that line of reasoning, it should be made clear that evolutionary opportunism always operates within the constraints of selection.