With the goal of understanding how Christopher Southgate communicates his in-depth knowledge of both science and theology, we investigated the many roles he assumes as a teacher. We settled upon wide-ranging topics that all intertwine: (1) his roles as author and coordinating editor of a premier textbook on science and theology, now in its third edition; (2) his oral presentations worldwide, including plenaries, workshops, and short courses; and (3) the team teaching approach itself, which is often needed by others because the knowledge of science and theology does not always reside in the same person. Southgate provides, whenever possible, teaching contexts that involve students in experiential learning, where they actively participate with other students. We conclude that Southgate’s ultimate goal is to teach students how to reconcile science and theology in their values and beliefs, so that they can take advantage of both forms of rational thinking in their own personal and professional lives. The co-authors consider several examples of models that have been successfully used by people in various fields to integrate science and religion.
Ad hominem arguments are generally dismissed on the grounds that they are not attempts to engage in rational discourse, but are rather aimed at undermining argument by diverting attention from claims made to assessments of the character of persons making claims. The manner of this dismissal, however, is based upon an unlikely paradigm of rationality: it is based upon the presumption that our intellectual capacities are not as limited as in fact they are, and do not vary as much as they do between rational people. When we understand rationality in terms of intellectual virtues, however, which recognize these limitations and provide for the complexity of our thinking, ad hominem considerations can sometimes be relevant to assessing arguments.
A number of cases involving self-locating beliefs have been discussed in the Bayesian literature. I suggest that many of these cases, such as the Sleeping Beauty case, are entangled with issues that are independent of self-locating beliefs per se. In light of this, I propose a division of labor: we should address each of these issues separately before we try to provide a comprehensive account of belief updating. By way of example, I sketch some ways of extending Bayesianism in order to accommodate these issues. Then, putting these other issues aside, I sketch some ways of extending Bayesianism in order to accommodate self-locating beliefs. Finally, I propose a constraint on updating rules, the "Learning Principle", which rules out certain kinds of troubling belief changes, and I use this principle to assess some of the available options.
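To make the self-locating element concrete, consider the familiar "thirder" calculation for the Sleeping Beauty case: a minimal sketch of one standard way of assigning credence to centered (self-locating) propositions, not necessarily the updating rule this paper itself defends. Let HM, TM, and TT name the three possible awakenings (Heads-Monday, Tails-Monday, Tails-Tuesday). Elga-style indifference equates the two subjectively indistinguishable tails awakenings, and fair-coin reasoning conditional on its being Monday equates HM with TM:

\[
P(TM) = P(TT), \qquad P(HM \mid \text{Monday}) = P(TM \mid \text{Monday}) = \tfrac{1}{2} \;\Rightarrow\; P(HM) = P(TM),
\]
\[
\text{so}\quad P(HM) = P(TM) = P(TT) = \tfrac{1}{3}, \qquad P(\text{Heads}) = P(HM) = \tfrac{1}{3}.
\]

The halfer, by contrast, holds that waking conveys no new information, so P(Heads) stays at 1/2; the disagreement is precisely over which updating rule should govern centered beliefs.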
Christopher Peacocke’s A Study of Concepts is a dense and rewarding work. Each chapter raises many issues for discussion. I know three different people who are writing reviews of the volume. It testifies to the depth of Peacocke’s book that each reviewer is focusing on a quite different set of topics.
The Neo-Aristotelian ethical naturalism of Philippa Foot and Rosalind Hursthouse purports to establish a naturalistic criterion for the virtues. Specifically, by developing a parallel between the natural ends of nonhuman animals and the natural ends of human beings, they argue that character traits are justified as virtues by the extent to which they promote and do not inhibit natural ends such as self-preservation, reproduction, and the well-being of one’s social group. I argue that the approach of Foot and Hursthouse cannot provide a basis for moral universalism, the widely accepted idea that each human being has moral worth and thus deserves significant moral consideration. Foot and Hursthouse both depict a virtuous agent as implicitly acting in accord with moral universalism. However, with respect to charity, a virtue they both emphasize, their naturalistic criterion at best provides a warrant for a restricted form of charity that extends only to a limited number of persons. There is nothing in the natural ends of human beings, as Foot and Hursthouse understand these, that gives us a reason for having any concern for the well-being of human beings as such.
In this interview, Christopher Norris discusses a wide range of issues having to do with postmodernism, deconstruction and other controversial topics of debate within present-day philosophy and critical theory. More specifically, he challenges the view of deconstruction as just another offshoot of the broader postmodernist trend in cultural studies and the social sciences. Norris puts the case for deconstruction as continuing the 'unfinished project of modernity' and—in particular—for Derrida's work as sustaining the values of enlightened critical reason in various spheres of thought from epistemology to ethics, sociology and politics. Along the way he addresses a number of questions that have lately been raised with particular urgency for teachers and educationalists, among them the revival of creationist doctrine and the idea of scientific knowledge as a social, cultural, or discursive construct. In this context he addresses the 'science wars', the debate between those who uphold the objectivity of scientific knowledge and those who regard it as a social or cultural construct.
One of the most noteworthy features of David Gauthier's rational choice, contractarian theory of morality is its appeal to self-interested rationality. This appeal, however, will undoubtedly be the source of much controversy and criticism. For while self-interestedness is characteristic of much human behavior, it is not characteristic of all such behavior, much less of that which is most admirable. Yet contractarian ethics appears to assume that humans are entirely self-interested. It is not usually thought a virtue of a theory that its assumptions are literally false. What may be said on behalf of the contractarian?
This remarkable book presents a rich and up-to-date view of evolution that explores the far-reaching implications of Darwin's theory and emphasizes the power, significance, and relevance of evolution to our lives today. After all, we ourselves are the product of evolution, and we can tackle many of our gravest challenges -- from the lethal resurgence of antibiotic-resistant diseases to the wave of extinctions that looms before us -- with a sound understanding of the science.
Philosophers from Hume, Kant, and Wittgenstein to the recent realists and antirealists have sought to answer the question, What are concepts? This book provides a detailed, systematic, and accessible introduction to an original philosophical theory of concepts that Christopher Peacocke has developed in recent years to explain facts about the nature of thought, including its systematic character, its relations to truth and reference, and its normative dimension. Particular concepts are also treated within the general framework: perceptual concepts, logical concepts, and the concept of belief are discussed in detail. The general theory is further applied in answering the question of how the ontology of concepts can be of use in classifying mental states, and in discussing the proper relation between philosophical and psychological theories of concepts. Finally, the theory of concepts is used to motivate a nonverificationist theory of the limits of intelligible thought. Peacocke treats content as broad rather than narrow, and his account is nonreductive and non-Quinean. Yet Peacocke also argues for an interactive relationship between philosophical and psychological theories of concepts, and he plots many connections with work in cognitive psychology.
This article is based on a roundtable held at the Association of Internet Researchers annual conference in 2017, in Tartu, Estonia. The roundtable was organized by the Oxford Internet Institute’s Digital Ethics Lab. It was entitled “Artificial Intelligence and the Good Society”. It brought together four scholars—Michael Zimmer, Stine Lomborg, Ben Zevenbergen, and Corinne Cath—to discuss the promises and perils of artificial intelligence, in particular what ethical frameworks are needed to guide AI’s rapid development and increased use in societies. The paper covers three case studies. They give a distinct overview of the ethical issues raised by the use of AI at different levels of analysis: top-down application of AI, bottom-up use of AI, and how academics and governments have reacted to these new challenges. From the case studies, four areas emerged. They represent some of the most topical ethical questions related to AI: its uses, its users, its designers, and the data that fuel it. Each of them provided a specific subset of ethical concerns that need further investigation. In conclusion, three recommendations are formulated for researchers and regulators to ensure that AI has a net-positive impact on society.
The sovereignty of the people, it is widely said, is the foundation of modern democracy. The truth of this claim depends on the plausibility of attributing sovereignty to “the people” in the first place, and I shall express skepticism about this possibility. I shall suggest as well that the notion of popular sovereignty is complex, and that appeals to the notion may be best understood as expressing several different ideas and ideals. This essay distinguishes many of these and suggests that greater clarity at least would be obtained by focusing directly on these notions and ideals and eschewing that of sovereignty. My claim, however, will not merely be that the notion is multifaceted and complex. I shall argue as well that the doctrine that the people are, or ought to be, sovereign is misleading in potentially dangerous ways, and is conducive to a misunderstanding of the nature of politics, governance, and social order. It would be well to do without the doctrine, but it may be equally important to understand its errors. Our understandings and justifications of democracy, certainly, should dispense with popular sovereignty.
The creation and consolidation of a memory can rest on the integration of any number of disparate features and contexts. How is it that these bind together to form a coherent memory? This book offers an unrivalled overview of one of the most debated hotspots of modern memory research, binding, and will instigate innovative and pioneering ideas for future research.
This paper reports the results of a questionnaire administered to 1178 undergraduate students and discusses how they responded to ten situations, each of which asked them to assess the ethical acceptability of the action involved, how society would assess the situation, and how business persons would respond. Multiple versions of the instrument were developed to investigate whether the sex of the person involved in the situation would influence the respondents' perception of the ethical action involved. No differences were identified. Further, the image of business persons as less ethical than society in general seems to have evaporated.
I begin, as I shall end, with fictions. In a well-known tale, The Sandman, Hoffmann has a student, Nathaniel, fall in love with a beautiful doll, Olympia, whom he has spied upon as she sits at a window across the street from his lodgings. We are meant to suppose that Nathaniel mistakes an automaton for a human being. The mistake is the result of an elaborate but obscure deception on the part of the doll's designer, Professor Spalanzani. Nathaniel is disabused quite by accident when he overhears a quarrel between Spalanzani, who made Olympia's clockwork, and the sinister Coppelius, who contributed the eyes.
Medical analogies are commonly invoked in both Indian Buddhist dharma and Hellenistic philosophy. In the Pāli Canon, nirvana is depicted as a form of health, and the Buddha is portrayed as a doctor who helps us attain it. Much later in the tradition, Śāntideva described the Buddha’s teaching as ‘the sole medicine for the ailments of the world, the mine of all success and happiness.’ Cicero expressed the view of many Hellenistic philosophers when he said that philosophy is ‘a medical science for the mind.’ He thought we should ‘hand ourselves over to philosophy, and let ourselves be healed.’ ‘For as long as these ills [of the mind] remain,’ he wrote, ‘we cannot attain to happiness.’ There are many different forms of medical analogy in these two traditions, but the most general form may be stated as follows: just as medicine cures bodily diseases and brings about physical health, so Buddhist dharma or Hellenistic philosophy cures mental diseases and brings about psychological health—where psychological health is understood as the highest form of happiness or well-being. Insofar as Buddhist dharma involves philosophy, as it does, both renditions of the analogy may be said to declare that philosophy cures mental diseases and brings about psychological health. This feature of the analogy—philosophy as analogous to medical treatment—has attracted considerable attention.
Arthur Schopenhauer's The Two Fundamental Problems of Ethics consists of two groundbreaking essays: 'On the Freedom of the Will' and 'On the Basis of Morals'. The essays make original contributions to ethics and display Schopenhauer's erudition, prose style and flair for philosophical controversy, as well as philosophical views that contrast sharply with the positions of both Kant and Nietzsche. Written accessibly, they do not presuppose the intricate metaphysics which Schopenhauer constructs elsewhere. This is the first English translation of these works to reunite both essays in one volume. It offers a new translation by Christopher Janaway, together with an introduction, editorial notes on Schopenhauer's vocabulary and the different editions of his essays, a chronology of his life, a bibliography, and a glossary of names.
Mathematics plays a central role in much of contemporary science, but philosophers have struggled to understand what this role is or how significant it might be for mathematics and science. In this book Christopher Pincock tackles this perennial question in a new way by asking how mathematics contributes to the success of our best scientific representations. In the first part of the book this question is posed and sharpened using a proposal for how we can determine the content of a scientific representation. Several different sorts of contributions from mathematics are then articulated. Pincock argues that each contribution can be understood as broadly epistemic, so that what mathematics ultimately contributes to science is best connected with our scientific knowledge. In the second part of the book, Pincock critically evaluates alternative approaches to the role of mathematics in science. These include the potential benefits for scientific discovery and scientific explanation. A major focus of this part of the book is the indispensability argument for mathematical platonism. Using the results of part one, Pincock argues that this argument can at best support a weak form of realism about the truth-value of the statements of mathematics. The book concludes with a chapter on pure mathematics and the remaining options for making sense of its interpretation and epistemology. Thoroughly grounded in case studies drawn from scientific practice, this book aims to bring together current debates in both the philosophy of mathematics and the philosophy of science and to demonstrate the philosophical importance of applications of mathematics.
This book revives the study of conventional implicatures in natural language semantics. H. Paul Grice first defined the concept. Since then his definition has seen much use and many redefinitions, but it has never enjoyed a stable place in linguistic theory. Christopher Potts returns to the original and uses it as a key into two presently under-studied areas of natural language: supplements and expressives. The account of both depends on a theory in which sentence meanings can be multidimensional. The theory is logically and intuitively compositional, and it minimally extends a familiar kind of intensional logic, thereby providing an adaptable, highly useful tool for semantic analysis. The result is a linguistic theory that is accessible not only to linguists of all stripes, but also to philosophers of language, logicians, and computer scientists who have linguistic applications in mind.
Being Known is a response to a philosophical challenge which arises for every area of thought: to reconcile a plausible account of what is involved in the truth of statements in a given area with a credible account of how we can know those statements. Christopher Peacocke presents a framework for addressing the challenge, a framework which links both the theory of knowledge and the theory of truth with the theory of concept-possession.
Luminance and color are strong and self-sufficient cues to pictorial depth in visual scenes and images. The present study investigates the conditions under which luminance or color either strengthens or overrides geometric depth cues. We investigated how luminance contrasts associated with color contrast interact with relative height in the visual field, partial occlusion, and interposition in determining the probability that a given figure is perceived as “nearer” than another. Latencies of “near” responses were analyzed to test for effects of attentional selection. Figures in a pair were supported by luminance contrast or isoluminant color contrast and combined with one of the three geometric cues. The results of Experiment 1 show that a luminance contrast associated with a hue, when that hue does not interact with other hues, produces the same effects as achromatic luminance contrast: the probability of “near” increases with luminance contrast while the latencies for “near” responses decrease. Partial occlusion is found to be a strong enough pictorial cue to support a weaker red luminance contrast. Interposition cues lose out against cues of spatial position and partial occlusion. The results of Experiment 2, with isoluminant displays of varying color contrast, reveal that red color contrast on a light background supported by any of the three geometric cues wins over green or white supported by any of the three geometric cues. On a dark background, red color contrast supported by the interposition cue loses out against green or white color contrast supported by partial occlusion. These findings reveal that color is not an independent depth cue, but is strongly influenced by luminance contrast and stimulus geometry. Systematically shorter response latencies for stronger “near” percepts demonstrate that selective visual attention reliably detects the most likely depth cue combination in a given configuration.
We live in a morally flawed world. Our lives are complicated by what other people do, and by the harms that flow from our social, economic and political institutions. Our relations as individuals to these collective harms constitute the domain of complicity. This book examines the relationship between collective responsibility and individual guilt. It presents a rigorous philosophical account of the nature of our relations to the social groups in which we participate, and uses that account in a discussion of contemporary moral theory. Christopher Kutz shows that the two prevailing theories of moral philosophy, Kantianism and consequentialism, both have difficulties resolving problems of complicity. He then argues for a richer theory of accountability in which any real understanding of collective action not only allows but demands individual responsibility.
Immersed in the heart of the nano world, Christophe Vieu highlights the diversity of sectors touched by the nano approach. Against the idea of a convergence of scientific fields, he sets the image of an invasive species. He therefore feels invested with a responsibility that extends to the technosciences as a whole.
Much debate has been held over the question of whether Hans-Georg Gadamer’s hermeneutic approach to ethics and the other can do justice to the alterity of the other, as exemplified in Emmanuel Levinas’s approach to ethics as first philosophy. The challenge to Gadamer, and to hermeneutics more generally, comes obliquely from Levinas and more directly from Robert Bernasconi, who argues that Gadamer cannot account for an otherness that ends in incomprehensibility, as one finds in encounters between persons of asymmetrical power relations—oppressed and oppressor, privileged and marginalized. Bernasconi’s critique has resulted in a flurry of hermeneutic responses that insist that Gadamer’s hermeneutics can, if understood in the right way, accommodate the other and serve as the foundation for robust ethical treatment of the other. I argue in this paper that participants in this debate have been insufficiently attentive to the ontologies that underlie the accounts of self and other in Gadamer and in Levinas. Because Gadamer and Levinas begin from different ontologies, their accounts of ethics and of the ground of ethics differ.
Various Value-Conscious Design frameworks have recently emerged to introduce moral and ethical intelligence into business and technical design contexts, with the goal of proactively influencing the design of technologies to account for moral and ethical values during the conception and design process. Two attempts to insert ethical intelligence into technical design communities to influence the design of technologies in ethical- and value-conscious ways are described, revealing discouraging results. Learning from these failed attempts, the article identifies three key challenges of pragmatic engagement with technical design communities: confronting competing values; identifying the role of the values advocate; and justifying a value framework. Addressing these challenges must become a priority if one is to be successful in pragmatically engaging with real-world business and design contexts to bring moral and ethical intelligence to bear in the design of emerging information and communication technologies.
Christopher G. Timpson provides the first full-length philosophical treatment of quantum information theory and the questions it raises for our understanding of the quantum world. He argues for an ontologically deflationary account of the nature of quantum information, which is grounded in a revisionary analysis of the concepts of information.
Christopher Peacocke presents a new theory of subjects of consciousness, together with a theory of the nature of first person representation. He identifies three sorts of self-consciousness--perspectival, reflective, and interpersonal--and argues that they are key to explaining features of our knowledge, social relations, and emotional lives.
I shall start by considering the apparently paradoxical doctrines that Wittgenstein put forward about knowledge: they show how the concept of knowledge is, as he says, ‘specialized’. This is not, as I shall show, a very important issue in itself, but it leads on to other points, of more interest: how it comes about, for example, that ‘not all corrections of our beliefs are on the same level’. I shall then discuss the idea that we inherit a certain picture of the world that forms the background of our experiments and researches. This idea, which is not of course unique to Wittgenstein, is, however, developed with many fresh insights. I end with some discussion of Wittgenstein's reported views on religious belief, which should not, in my opinion, be regarded as part of his contribution to philosophy, the interest of them being, perhaps, more biographical than philosophical.
Many authors, including Derek Parfit, T. M. Scanlon, and Mark Schroeder, favor a “reasons-first” ontology of normativity, which treats reasons as normatively fundamental. Others, most famously G. E. Moore, favor a “value-first” ontology, which treats value or goodness as normatively fundamental. Chapter 10 argues that both the reasons-first and value-first ontologies should be rejected because neither can account for all of the normative reasons that, intuitively, there are. It advances an ontology of normativity, originally suggested by Franz Brentano and A. C. Ewing, according to which fittingness is normatively fundamental. The normative relation of fittingness is the relation in which a response stands to an object when the object merits—or is worthy of—that response. The author argues that his “fittingness-first” ontology is no less parsimonious than either the reasons- or the value-first ontology, but it can plausibly accommodate the existence of all the normative reasons there are. It therefore provides a superior ontology of normativity.
This is a book about sensory states and their apparent characteristics. It confronts a whole series of metaphysical and epistemological questions and presents an argument for type materialism: the view that sensory states are identical with the neural states with which they are correlated. According to type materialism, sensations are only possessed by human beings and members of related biological species; silicon-based androids cannot have sensations. The author rebuts several other rival theories, and explores a number of important issues: the forms and limits of introspective awareness of sensations, the semantic properties of sensory concepts, knowledge of other minds, and unity of consciousness. The book is a significant contribution to the philosophy of mind, and has much to say to psychologists and cognitive scientists.
This paper investigates the way that linguistic expressions influence vagueness, focusing on the interpretation of the positive (unmarked) form of gradable adjectives. I begin by developing a semantic analysis of the positive form of ‘relative’ gradable adjectives, expanding on previous proposals by further motivating a semantic basis for vagueness and by precisely identifying and characterizing the division of labor between the compositional and contextual aspects of its interpretation. I then introduce a challenge to the analysis from the class of ‘absolute’ gradable adjectives: adjectives that are demonstrably gradable, but which have positive forms that relate objects to maximal or minimal degrees, and do not give rise to vagueness. I argue that the truth conditional difference between relative and absolute adjectives in the positive form stems from the interaction of lexical semantic properties of gradable adjectives—the structure of the scales they use—and a general constraint on interpretive economy that requires truth conditions to be computed on the basis of conventional meaning to the extent possible, allowing for context dependent truth conditions only as a last resort.
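For concreteness, a standard degree-semantic rendering of the positive form runs as follows: this is a minimal sketch in the spirit of such analyses, not necessarily the paper's exact formulation, where pos is the unpronounced positive morpheme, g a gradable adjective meaning (a measure function), and s a standard-fixing function assumed here for illustration:

\[
[\![\mathrm{pos}]\!] \;=\; \lambda g\,\lambda x.\; g(x) \succeq s(g)
\]

For a relative adjective such as 'tall', s(g) is a context-dependent standard on an open scale, which is what produces borderline cases and hence vagueness; for an absolute adjective such as 'full' or 'bent', the economy constraint would fix s(g) at the maximal or minimal endpoint of the scale, yielding fixed, non-vague truth conditions.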