Religious believers understand the meaning of their lives and of the world in terms of the way these are related to God. How, Vincent Brümmer asks, does the model of love apply to this relationship? He shows that most views on love take it to be an attitude rather than a relationship: exclusive attention (Ortega y Gasset), ecstatic union (nuptial mysticism), passionate suffering (courtly love), need-love (Plato, Augustine) and gift-love (Nygren). In discussing the issues, Brümmer inquires what role these attitudes play within the love-relationship and examines the implications of using the model of love as a key paradigm in theology.
This short work shows how systematic theology is itself a philosophical enterprise. After analyzing the nature of philosophical enquiry and its relation to systematic theology, and after explaining how theology requires that we talk about God, Vincent Brümmer illustrates how philosophical analysis can help in dealing with various conceptual problems involved in the fundamental Christian claim that God is a personal being with whom we may live in a personal relationship.
There are legitimate worries about gaps between scientific evidence of brain states and function (for example, as evidenced by fMRI data) and legal criteria for determining criminal culpability. In this paper I argue that behavioral evidence of capacity, motive and intent appears easier for judges and juries to use for purposes of determining criminal liability because such evidence triggers the application of commonsense psychological (CSP) concepts that guide and structure criminal responsibility. In contrast, scientific evidence of neurological processes and function – such as evidence that the defendant has a large brain tumor – will not generally lead a judge or jury to directly infer anything that is relevant to the legal determination of criminal culpability (Vincent 2008). In these cases, an expert witness will be required to indicate to the fact-finder what this evidence means with regard to mental capacity; and then another inference will have to be made from this possible lack of capacity to the legal criteria for guilt, cast in CSP terms. To reliably link evidence of brain function and structure and assessment of criminal responsibility, we need to re-conceptualize the mental capacities necessary for responsibility, particularly those that are recognized as missing or compromised by the doctrines of “legal capacity” (Hart 1968) and “diminished capacity.” I argue that formulating these capacities as executive functions within the brain can provide this link. I further claim that it would be extremely useful to consider evidence of executive function as related to the diminished capacity doctrine at sentencing. This is because it is primarily at this stage in criminal proceedings where the use of the diminished capacity doctrine is most prevalent, as evidenced by the recent Supreme Court cases of Atkins v. Virginia (536 U.S. 304 (2002)) and Roper v. Simmons (543 U.S. 551 (2005)).
This paper offers a critical reconsideration of the traditional doctrine that responsibility for a crime requires a voluntary act. I defend three general propositions: first, that orthodox Anglo-American criminal theory fails to explain adequately why criminal responsibility requires an act; second, that when it comes to the just definition of crimes, the act requirement is at best a rough generalization rather than a substantive limiting principle; and third, that the intuition underlying the so-called “act requirement” is better explained by what I call the “practical-agency condition,” according to which punishment in a specific instance is unjust unless the crime charged was caused or constituted by the agent's conduct qua practically rational agent. The practical-agency condition is defended as a reconstruction of what is worth retaining in Anglo-American criminal law's traditional notion of an “act requirement.”
It is now a problem more or less universally acknowledged that religion, even in an ostensibly secular age, must be in need of good commentary. The underlying problem is: what would constitute good commentary at this point? It is not as if religion has just appeared on the horizon of the secular intellectual. Even if we restrict our purview to nonreligious, nontheological discourse, there is a long tradition of critical appraisals and histories of religious phenomena, dating from the ancient Greeks. The field receives an intellectual boost of sorts in the late eighteenth, the nineteenth, and the early twentieth centuries, as the religion of the theologians and prophetic reformers gives way to anthropological and sociological disciplines, the better to be scientifically understood and codified. This upsurge in the secular accounting for religious belief is often explained as the result of the Enlightenment—that is, materialist explanations of nature, textual authority, and psychology eventually turn religion into a natural function of human will, a series of authorial inventions, and a psychological manifestation of deeper impulses, from love, to class-based self-narcotizing illusion, to fear of the loss of paternal care. Max Weber proposed the most intriguing and far-reaching hypothesis about how the Enlightenment superseded religion in the West: Protestant reform within Christianity itself—beginning with Luther and Calvin—designed to produce a purer and far less magical, mystical, hierarchic, and corrupt system of belief, had the unintended consequence of laying the psychological foundations for ascetic capitalism, and hence the seemingly inevitable decline of religion in favor of worldly pursuits.
This article presents results of exploratory research conducted with managers from over 500 Norwegian companies to examine corporate motives for engaging in social initiatives. Three key questions were addressed. First, what do managers in this sample see as the primary reasons their companies engage in activities that benefit society? Second, do motives for such social initiative vary across the industries represented? Third, can further empirical support be provided for the theoretical classifications of social initiative motives outlined in the literature? Previous research on the topic is reviewed, study methods are described, results are presented, and implications of findings are discussed. The article concludes with an analysis of study limitations and directions for future research.
Vincent Descombes is a French philosopher. He has taught at the University of Montréal, Johns Hopkins University, and Emory University. Presently, he is director of studies at the École des Hautes Études en Sciences Sociales, Paris, and regular visiting professor at the University of Chicago in the Department of Romance. Descombes’s main areas of research are in the philosophy of mind, philosophy of language and philosophy of literature. The following interview covers various aspects of his research in the philosophy of mind and language: semantic anti-realism, phenomenology, the content of mental states, description and transparency, the linguistic turn, metaphysics and linguistic analysis, fictional names and animal intentionality.
Linguistic intuitive judgements are the de facto data source of choice within generative linguistics. But why are we justified in relying on intuitive judgements as evidence for grammars? In the philosophy of linguistics, this question has been hotly debated. I argue that the three most prominent views of that debate all have their problems. Devitt’s Modest Explanation accounts for the wrong kind of intuitive judgements. The Voice of Competence view and Rey’s account both lack independent evidence. I introduce and defend a novel proposal that accounts for the evidential role of linguistic intuitive judgements and avoids these shortcomings. On this account, linguistic intuitive judgements are reports of the speaker’s immediate experience of trying to comprehend the sentence. This experience is due to the speaker’s linguistic competence, at least in part, and so the justification for the evidential use of linguistic intuitions ultimately comes from the speaker’s competence. However, the account does not rely on any special input from the speaker’s competence being available as the basis for linguistic intuitive judgements.
There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used by humans. This includes issues of privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7). Then AI systems as subjects, i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial moral agency (§2.9). Finally, the problem of a possible future AI superintelligence leading to a “singularity” (§2.10). We close with a remark on the vision of AI (§3). - For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, then analyse how these play out with current technologies and finally, what policy consequences may be drawn.
Mainstream and Formal Epistemology provides the first, easily accessible, yet erudite and original analysis of the meeting point between mainstream and formal theories of knowledge. These two strands of thinking have traditionally proceeded in isolation from one another, but in this book, Vincent F. Hendricks brings them together for a systematic comparative treatment. He demonstrates how mainstream and formal epistemology may significantly benefit from one another, paving the way for a new unifying program of 'plethoric' epistemology. His book will both define and further the debate between philosophers from two very different sides of the epistemological spectrum.
This open access book looks at how a democracy can devolve into a post-factual state. The media is being flooded by populist narratives, fake news, conspiracy theories and make-believe. Misinformation is turning into a challenge for all of us, whether politicians, journalists, or citizens. In the age of information, attention is a prime asset and may be converted into money, power, and influence – sometimes at the cost of facts. The point is to obtain exposure on the air and in print media, and to generate traffic on social media platforms. With information in abundance and attention scarce, the competition is ever fiercer with truth all too often becoming the first victim. Reality Lost: Markets of Attention, Misinformation and Manipulation is an analysis by philosophers Vincent F. Hendricks and Mads Vestergaard of the nuts and bolts of the information market, the attention economy and media eco-system which may pave the way to postfactual democracy. Here misleading narratives become the basis for political opinion formation, debate, and legislation. To curb this development and the threat it poses to democratic deliberation, political self-determination and freedom, it is necessary that we first grasp the mechanisms and structural conditions that cause it.
In this paper, we propose a Simondonian interpretation of quantum mechanics taking as a standpoint his “preindividual hypothesis” in order to consider the problem of contextuality. We will examine whether the epistemological obstacle produced by the notion of entity can be bypassed by specifying, according to Simondon and the Kochen-Specker Theorem, the mode of existence of quantum potentialities.
This is a critical introduction to modern French philosophy, commissioned from one of the liveliest contemporary practitioners and intended for an English-speaking readership. The dominant 'Anglo-Saxon' reaction to philosophical development in France has for some decades been one of suspicion, occasionally tempered by curiosity but more often hardening into dismissive rejection. But there are signs now of a more sympathetic interest and an increasing readiness to admit and explore shared concerns, even if these are still expressed in a very different idiom and intellectual context. Vincent Descombes offers here a personal guide to the main movements and figures of the last forty-five years. He traces over this period the evolution of thought from a generation preoccupied with the 'three H's' - Hegel, Husserl and Heidegger, to a generation influenced since about 1960 by the 'three masters of suspicion' - Marx, Nietzsche and Freud. In this framework he deals in turn with the thought of Sartre, Merleau-Ponty, the early structuralists, Foucault, Althusser, Serres, Derrida, and finally Deleuze and Lyotard. The 'internal' intellectual history of the period is related to its institutional setting and the wider cultural and political context which has given French philosophy so much of its distinctive character.
The author of this paper explores a central strand in the complex relationship between Peirce and Kant. He argues, against Kant, that the practical identity of the self-critical agent who undertakes a Critic of reason needs to be conceived in substantive, not purely formal, terms. Thus, insofar as there is a reflexive turn in Peirce, it is quite far from the transcendental turn taken by Immanuel Kant. The identity of the being devoted to redefining the bounds of reason is not that of a disembodied, rational will giving laws to itself. Nor is it that of a being whose passions and especially sentiments are heteronomous determinations of the deliberative agency in question. Rather the identity of this being is that of a somatic, social, and historical agent whose very autonomy not only traces its origin to heteronomy but also ineluctably involves an identification with what, time and again, emerges as other than this agent. A strong claim is made regarding human identity being practical identity. An equally strong claim is made regarding the upshot of Peirce's decisive movement beyond Kant's transcendental project: this movement unquestionably drives toward a compelling account of human agency.
The philosophy of perception currently considers how perception relates to action. Some distinctions may help: distinguishing object perception from perceptual recognition, and both from that-perception. Examples are seeing a man, recognising a man, and seeing that there is a man. Perceiving an object controls self-location; recognising an object, which depends on memory of how it looks, controls looking for it and interacting with it, or not; and that-perceiving controls saying that an object exists. Perception controls action. Milner and Goodale, Jacob and Jeannerod, and Noë are considered.
I am honoured and pleased to address you this evening on the life and work of an extraordinary American thinker, Charles Sanders Peirce. Although Peirce is perhaps most often remembered as the father of the philosophical movement known as pragmatism, I would like to impress upon you that he was also, and perhaps especially, a logician, a working scientist and a mathematician. During his lifetime Peirce most often referred to himself, and was referred to by his colleagues, as a logician. Furthermore, Peirce spent thirty years actively engaged in scientific research for the US Coast Survey. The National Archives in Washington, DC, holds some five thousand pages of Peirce's reports on this work. Finally, the four volumes of Peirce's mathematical papers edited by Professor Carolyn Eisele eloquently testify to his contributions to that field as well.
How do people make sense of their experiences? How do they understand possibility? How do they limit possibility? These questions are central to all the human sciences. Here, Vincent Crapanzano offers a powerfully creative new way to think about human experience: the notion of imaginative horizons. For Crapanzano, imaginative horizons are the blurry boundaries that separate the here and now from what lies beyond, in time and space. These horizons, he argues, deeply influence both how we experience our lives and how we interpret those experiences, and here he sets himself the task of exploring the roles that creativity and imagination play in our experience of the world.
[Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] -- This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. This work examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as ethical and societal problems that artificial intelligence raises or will raise. The key issues this volume investigates include the relation of AI and cognitive science, ethics of AI and robotics, brain emulation and simulation, hybrid systems and cyborgs, intelligence and intelligence testing, interactive systems, multi-agent systems, and superintelligence. Based on the 2nd conference on “Theory and Philosophy of Artificial Intelligence” held in Oxford, the volume includes prominent researchers within the field from around the world.
Will future lethal autonomous weapon systems (LAWS), or ‘killer robots’, be a threat to humanity? The European Parliament has called for a moratorium or ban of LAWS; the ‘Contracting Parties to the Geneva Convention at the United Nations’ are presently discussing such a ban, which is supported by the great majority of writers and campaigners on the issue. However, the main arguments in favour of a ban are unsound. LAWS do not support extrajudicial killings, they do not take responsibility away from humans; in fact they increase the ability to hold humans accountable for war crimes. Using LAWS in war would probably reduce human suffering overall. Finally, the availability of LAWS would probably not increase the probability of war or other lethal conflict—especially as compared to extant remote-controlled weapons. The widespread fear of killer robots is unfounded: They are probably good news.
The relation between Martin Heidegger and radical environmentalism has been a subject of discussion for several years now. On the one hand, Heidegger is portrayed as a forerunner of the deep ecology movement, providing an alternative for the technological age we live in. On the other hand, commentators contend that the basic thrust of Heidegger’s thought cannot be found in such an ecological ethos. In this article, this debate is revisited in order to answer the question whether it is possible to conceive human dwelling on earth in a way which is consistent with the technological world we live in and heralds another beginning at the same time. Our point of departure in this article is not the work of Heidegger but the affordance theory of James Gibson, which will prove to be highly compatible with the radical environmentalist concept of nature as well as Heidegger’s concept of the challenging of nature.
We interpret solution rules on a class of simple allocation problems as data on the choices of a policy maker. We analyze conditions under which the policy maker’s choices are (i) rational, (ii) transitive-rational, and (iii) representable; that is, they coincide with maximization of a (i) binary relation, (ii) transitive binary relation, and (iii) numerical function on the allocation space. Our main results are as follows: (i) a well-known property, contraction independence (a.k.a. IIA) is equivalent to rationality; (ii) every contraction independent and other-c monotonic rule is transitive-rational; and (iii) every contraction independent and other-c monotonic rule, if additionally continuous, can be represented by a numerical function.
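The two central properties in this abstract can be stated compactly. The notation below is a sketch in my own symbols, not the authors': F is a rule, p ranges over problems, A(p) is the feasible set of p, and ≿ is a binary relation on the allocation space.

```latex
% Sketch only: F a rule, A(p) the feasible set of problem p,
% \succsim a binary relation on the allocation space (notation mine).
\textbf{Contraction independence (IIA):}\quad
A(p') \subseteq A(p) \ \text{and}\ F(p) \in A(p')
\;\Longrightarrow\; F(p') = F(p).

\textbf{Rationality:}\quad
\exists \succsim \ \text{such that for every } p:\quad
F(p) \in A(p) \ \text{and}\ F(p) \succsim x \ \text{for all } x \in A(p).
```

Read this way, the paper's first result says that a rule survives every contraction of the feasible set that still contains its chosen allocation exactly when some relation ≿ rationalizes its choices.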
The philosophy of AI has seen some changes, in particular: (1) AI is moving away from cognitive science, and (2) the long-term risks of AI now appear to be a worthy concern. In this context, the classical central concerns – such as the relation of cognition and computation, embodiment, intelligence and rationality, and information – will regain urgency.
The author of this paper explores a central strand in the complex relationship between Peirce and Kant. He argues, against Kant (especially as reconstructed by Christine Korsgaard), that the practical identity of the self-critical agent who undertakes a Critic of reason (as Peirce insisted upon translating this expression) needs to be conceived in substantive, not purely formal, terms. Thus, insofar as there is a reflexive turn in Peirce, it is quite far from the transcendental turn taken by Immanuel Kant. The identity of the being devoted to redefining the bounds of reason (for the drawing of such bounds is always a historically situated and motivated undertaking) is not that of a disembodied, rational will giving laws to itself. Nor is it that of a being whose passions and especially sentiments are heteronomous determinations of the deliberative agency in question. Rather the identity of this being is that of a somatic, social, and historical agent whose very autonomy not only traces its origin to heteronomy but also ineluctably involves an identification with what, time and again, emerges as other than this agent. A strong claim is made regarding human identity being practical identity (practical identity being understood here as the singular shape acquired by a human being in the complex course of its practical involvements, its participation in the array of practices in and through which such a being carries out its life). An equally strong claim is made regarding the upshot of Peirce's decisive movement beyond Kant's transcendental project: this movement unquestionably drives toward a compelling account of human agency.
In this paper it is argued that existing ‘self-representational’ theories of phenomenal consciousness do not adequately address the problem of higher-order misrepresentation. Drawing a page from the phenomenal concepts literature, a novel self-representational account is introduced that does. This is the quotational theory of phenomenal consciousness, according to which the higher-order component of a conscious state is constituted by the quotational component of a quotational phenomenal concept. According to the quotational theory of consciousness, phenomenal concepts help to account for the very nature of phenomenally conscious states. Thus, the paper integrates two largely distinct explanatory projects in the field of consciousness studies: (i) the project of explaining how we think about our phenomenally conscious states, and (ii) the project of explaining what phenomenally conscious states are in the first place.
The contribution of the body to cognition and control in natural and artificial agents is increasingly described as “off-loading computation from the brain to the body”, where the body is said to perform “morphological computation”. Our investigation of four characteristic cases of morphological computation in animals and robots shows that the ‘off-loading’ perspective is misleading. Actually, the contribution of body morphology to cognition and control is rarely computational, in any useful sense of the word. We thus distinguish (1) morphology that facilitates control, (2) morphology that facilitates perception, and the rare cases of (3) morphological computation proper, such as ‘reservoir computing’, where the body is actually used for computation. This result contributes to the understanding of the relation between embodiment and computation: The question for robot design and cognitive science is not whether computation is offloaded to the body, but to what extent the body facilitates cognition and control – how it contributes to the overall ‘orchestration’ of intelligent behaviour.
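To make the third category concrete, here is a minimal sketch of reservoir computing in the sense used above: a fixed, random dynamical system (standing in for a body's morphology) transforms an input stream into a high-dimensional state, and only a simple readout would be trained. All names and sizes are illustrative assumptions of mine, not taken from the paper.

```python
# Minimal reservoir-computing sketch: the "reservoir" plays the role of a
# body whose fixed dynamics do useful transformation work for free.
import math
import random

random.seed(0)
N = 20  # reservoir ("body") size; an arbitrary illustrative choice

# Fixed random weights: never trained, like a physical morphology.
W_in = [random.uniform(-1, 1) for _ in range(N)]
W = [[random.uniform(-0.2, 0.2) for _ in range(N)] for _ in range(N)]

def step(state, u):
    """One update of the reservoir driven by scalar input u."""
    return [math.tanh(W_in[i] * u + sum(W[i][j] * state[j] for j in range(N)))
            for i in range(N)]

# Drive the reservoir with a simple input signal.
state = [0.0] * N
trajectory = []
for t in range(50):
    state = step(state, math.sin(0.3 * t))
    trajectory.append(state)

# Each state is a nonlinear echo of the input history; a task would be
# solved by fitting only a linear readout on `trajectory`.
print(len(trajectory), len(trajectory[0]))
```

The design point this illustrates is exactly the authors' distinction: the recurrent weights do real computational work yet are never adjusted, so "computation" here is genuinely located in the fixed substrate rather than in learning.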
The human intestinal ecosystem, previously called the gut microflora, is now known as the Human Gut Microbiota (HGM). Microbiome research has emphasized the potential role of this ecosystem in human homeostasis, offering unexpected opportunities in therapeutics, far beyond digestive diseases. It has also highlighted ethical, social and commercial concerns related to the gut microbiota. As diet factors are accepted to be the major regulator of the gut microbiota, the modulation of its composition, either by antibiotics or by food intake, should be regarded as a fascinating tool for improving human health. Scientists, the food industry, consumers and policymakers alike are involved in this new field of nutrition. Defining how knowledge about the HGM is being translated into public perception has never been addressed before. This raises the question of metaphors associated with the HGM, and how they could be used to improve public understanding, and to influence individual decision-making on healthcare policy. This article suggests that a meeting of stakeholders from the social sciences, basic research and the food industry, taking an epistemological approach to the HGM, is needed to foster close, innovative partnerships that will help shape public perception and enable novel behavioural interventions that would benefit public health.
Based on a careful study of his unpublished manuscripts as well as his published work, this book explores Peirce's general theory of signs and the way in which Peirce himself used this theory to understand subjectivity.
Although research into fair and alternative trade networks has increased significantly in recent years, very little synthesis of the literature has occurred thus far, especially for social considerations such as gender, health, labor, and equity. We draw on insights from critical theorists to reflect on the current state of fair and alternative trade, draw out contradictions from within the existing research, and suggest actions to help the emancipatory potential of the movement. Using a systematic scoping review methodology, this paper reviews 129 articles and reports that discuss the social dimensions of fair and alternative trade experienced by Southern agricultural producers and workers. The results highlight gender, health, and labor dimensions of fair and alternative trade systems and suggest that diverse groups of producers and workers may be experiencing related inequities. By bringing together issues that are often only tangentially discussed in individual studies, the review gives rise to a picture that suggests that research on these issues is both needed and emerging. We end with a summary of key findings and considerations for future research and action.
Investigating Girard's new propositional calculus, which aims at a large-scale study of computation, we quickly stumble on this question: what is a multiplicative connective? We give here a detailed answer, together with our motivations and expectations.
The Canadian-American biologist Edmund Vincent Cowdry played an important role in the birth and development of the science of aging, gerontology. In particular, he contributed to the growth of gerontology as a multidisciplinary scientific field in the United States during the 1930s and 1940s. With the support of the Josiah Macy, Jr. Foundation, he organized the first scientific conference on aging at Woods Hole, Massachusetts, where scientists from various fields gathered to discuss aging as a scientific research topic. He also edited Problems of Ageing (1939), the first handbook on the current state of aging research, to which specialists from diverse disciplines contributed. The authors of this book eventually formed the Gerontological Society in 1945 as a multidisciplinary scientific organization, and some of its members, under Cowdry's leadership, formed the International Association of Gerontology in 1950. This article historically traces this development by focusing on Cowdry's ideas and activities. I argue that the social and economic turmoil during the Great Depression along with Cowdry's training and experience as a biologist – cytologist in particular – and as a textbook editor became an important basis of his efforts to construct gerontology in this direction.
[This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] - - - In some quarters, there is intense concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts. Overall, the results show an agreement among experts that AI systems will probably reach overall human ability around 2040-2050 and move on to superintelligence in less than 30 years thereafter. The experts say the probability is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
This paper is concerned with the conception of the individual in Hegelian thought. The discussion will focus on some of the textual uses that Hegel and some Hegelians make of the term individual. The ultimate aim of the paper, however, is to focus on the concrete individual and to argue that there are two fundamentally important yet distinct uses to which Hegel and some Hegelians put the term. These two uses are not compatible, dialectically or otherwise. The plan of this paper is to state the nature of the problem of the individual and then to examine it in more detail through the writings of representative British Hegelians.
Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# - Risks of general artificial intelligence, Vincent C. Müller, pages 297-301 - Autonomous technology and the greater human good, Steve Omohundro, pages 303-315 - The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342 - The path to more general artificial intelligence, Ted Goertzel, pages 343-354 - Limitations and risks of machine ethics, Miles Brundage, pages 355-372 - Utility function security in artificially intelligent agents, Roman V. Yampolskiy, pages 373-389 - GOLEM: towards an AGI meta-architecture enabling both goal preservation and radical self-improvement, Ben Goertzel, pages 391-403 - Universal empathy and ethical bias for artificial general intelligence, Alexey Potapov & Sergey Rodionov, pages 405-416 - Bounding the impact of AGI, András Kornai, pages 417-438 - Ethics of brain emulations, Anders Sandberg, pages 439-457.
A resurgence of interest in virtue ethics has engendered new insight into the fundamental link between selfhood and morality. In contradistinction to the currently ascendant justice-reasoning research paradigm, it appears that a virtue ethics approach to moral psychology provides a theoretical framework which is amenable to the empirical investigation of the nature and formation of the moral self. Six primary features of virtue ethics are delineated with a unifying emphasis throughout on the inextricable link between virtue and moral selfhood. Questions and issues concerning the possibility of a psychology of virtue ethics are directly addressed throughout.
While the work of such expositors as Max H. Fisch, James J. Liszka, Lucia Santaella, Anne Friedman, and Mats Bergman has helped bring into sharp focus why Peirce took the third branch of semiotic (speculative rhetoric) to be "the highest and most living branch of logic," more needs to be done to show the extent to which the least developed branch of his theory of signs is, at once, potentially its most fruitful and important one. The author of this paper thus begins to trace out, even more fully than these scholars have done, the unfinished trajectory of Peirce's eventual realization of the importance of speculative rhetoric. In doing so, he argues for a shift from the formalist and taxonomic emphasis of so many commentators to a more thoroughly pragmaticist and "rhetorical" approach to interpreting Peirce's theory of signs.
At first glance, twentieth-century philosophy of science seems virtually to ignore chemistry. However, this paper argues that a focus on chemistry helped shape French philosophical reflections about the aims and foundations of scientific methods. Despite patent philosophical disagreements between Duhem, Meyerson, Metzger and Bachelard, it is possible to identify the continuity of a tradition that is rooted in their common interest in chemistry. Two distinctive features of the French tradition originated in this attention to what was going on in chemistry. First, French philosophers of science, in stark contrast with analytic philosophers, considered the history of science the necessary basis for understanding how the human intellect or the scientific spirit tries to grasp the world. This constant reference to historical data was prompted by a fierce controversy about the chemical revolution, which brought the issue of the nature of scientific change centre stage. A second striking, albeit largely unnoticed, feature of the French tradition is that matter theories are a favourite subject with which to characterize the ways of science. Duhem, Meyerson, Metzger and Bachelard developed most of their views about the methods and aims of science through a discussion of matter theories. Just as the concern with history was prompted by a controversy between chemists, the focus on matter was triggered by a scientific controversy about atomism in the late nineteenth century. Keywords: France; Epistemology; Chemistry; Revolution; Atomism; Realism.
This book provides a valuable look at the work of up-and-coming epistemologists. The topics covered range from the central issues of mainstream epistemology to the more formal issues in epistemic logic and confirmation theory. This book should be read by anyone interested in seeing where epistemology is currently focused and where it is heading. - Stewart Cohen, Arizona State University.