The authors discuss some of the conceptual issues that must be considered in using and understanding psychiatric classification. DSM-IV is a practical and common-sense nosology of psychiatric disorders that is intended to improve communication in clinical practice and in research studies. DSM-IV has no philosophic pretensions but does raise many philosophical questions. This paper describes the development of DSM-IV and the way in which it addresses a number of philosophic issues: nominalism vs. realism, epistemology in science, the mind/body dichotomy, the definition of mental disorders, and dimensional vs. categorical classification. Keywords: DSM-IV, nosology, psychiatric classification
Contains fourteen essays and an introduction addressing the main areas of scholarly interest for Richard W. Davis, Professor Emeritus, Washington University, St Louis. Questions how individuals envisioned the public good in modern Britain and how, through religious and moral beliefs, coupled with wisdom and political savvy, they could improve the public good through the ever-changing nineteenth-century political institutions. Essays range from studies of local electoral politics and parliamentary reform campaigns to national political party organization, high politics, and the role religion and empire played in the creation of national policy. Examines the influence of individuals on the political process through their professional work in historical and philosophical writing, journalism, and missionary work at home and abroad. Provides new original research in the area of modern British political history, gathered together in Parliamentary History.
This target article presents a plausible evolutionary scenario for the emergence of the neural preconditions for language in the hominid lineage. In Pleistocene primate lineages there was a paired evolutionary expansion of frontal and parietal neocortex (through certain well-documented adaptive changes associated with manipulative behaviors) resulting, in ancestral hominids, in an incipient Broca's region and in a configurationally unique junction of the parietal, occipital, and temporal lobes of the brain (the POT). On our view, the development of the POT in our ancestors resulted in the neuroanatomical substrate consistent with the ability for representations in modality-neutral association cortex and, as a result of structure-imposing interaction with Broca's area, the hierarchically structured representations characteristic of conceptual structure. Evidence from paleoneurology and comparative primate neuroanatomy is used to argue that Homo habilis (2.5–2 million years ago) was the first hominid to have the appropriate gross neuroanatomical configuration to support conceptual structure. We thus suggest that the neural preconditions for language are met in H. habilis. Finally, we advocate a theory of language acquisition that uses conceptual structure as input to the learning procedures, thus bridging the gap between it and language.
This response to continuing commentary addresses brain-hand relationships in Cebus apella (as introduced in Westergaard's commentary), the evolutionary and acquisition parallels between music and language (suggested by Lynch), and the potential behavioral and linguistic consequences of the evolutionary neurobiology in Australopithecus africanus and Homo habilis (discussed by Tobias). Finally, we reiterate the importance of well-informed, multidisciplinary approaches to the study of the emergence of human species-specific cognition, especially linguistic capacity.
This response clarifies the nature of reappropriation and the definition of language. It explicates the relationship between neural systems and language and between homology and evolutionary gradualism. Through a review of ape capacities in the realms of language and tool use, it distinguishes human language acquisition from nonhuman learning. Finally, it suggests the appropriate sorts of evidence on which to base further evolutionary arguments relevant to the origins of language.
H. P. Grice virtually discovered the phenomenon of implicature, coining the term to denote the implications of an utterance that are not strictly implied by its content. Gricean theory claims that conversational implicatures can be explained and predicted using general psycho-social principles. This theory has established itself as one of the orthodoxies in the philosophy of language. Wayne Davis argues controversially that Gricean theory does not work. He shows that any principle-based theory understates both the intentionality of what a speaker implicates and the conventionality of what a sentence implicates. In developing his argument, the author explains that the psycho-social principles actually define the social function of implicature conventions, which contribute to the satisfaction of those principles. This challenging book will be of importance to philosophers of language and linguists, especially those working in pragmatics and sociolinguistics.
MacFarlane distinguishes "context sensitivity" from "indexicality," and argues that "nonindexical contextualism" has significant advantages over the standard indexical form. MacFarlane's substantive thesis is that the extension of an expression may depend on an epistemic standard variable even though its content does not. Focusing on 'knows,' I will argue against the possibility of extension dependence without content dependence when factors such as meaning, time, and world are held constant, and show that MacFarlane's nonindexical contextualism provides no advantages over indexical contextualism. The discussion will shed light on the definition of indexicals as well as the meaning of 'knows,' and highlight important constraints on the way meaning can be represented in semantics. (Wayne A. Davis, Georgetown University. Philosophical Studies, pp. 1–14, DOI 10.1007/s11098-011-9831-1.)
Michael Davis, a leading figure in the study of professional ethics, offers here both a compelling exploration of engineering ethics and a philosophical analysis of engineering as a profession. After putting engineering in historical perspective, Davis turns to the Challenger space shuttle disaster to consider the complex relationship between engineering ideals and contemporary engineering practice. Here, Davis examines how social organization and technical requirements define how engineers should (and presumably do) think. Later chapters test his analysis of engineering judgement and autonomy empirically, engaging a range of social science research including a study of how engineers and managers work together in ten different companies.
Wayne Davis presents a highly original approach to the foundations of semantics, showing how the so-called "expression" theory of meaning can handle names and other problematic cases of nondescriptive meaning. The fact that thoughts have parts ("ideas" or "concepts") is fundamental: Davis argues that like other unstructured words, names mean what they do because they are conventionally used to express atomic or basic ideas. In the process he shows that many pillars of contemporary philosophical semantics, from Twin Earth arguments to the necessity of identity, are unfounded.
In this compelling book, John B. Davis examines the change and development in Keynes's philosophical thinking, from his earliest work through to The General Theory, arguing that Keynes came to believe himself mistaken about a number of his early philosophical concepts. The author begins by looking at the unpublished 'Apostles' papers, written under the influence of the philosopher G. E. Moore. These display the tensions in Keynes's early philosophical views, and outline his philosophical concepts of the time, including the concept of intuition. Davis then shows how Keynes's later philosophy is implicit in the economic argument of The General Theory. He argues that Keynes's philosophy had by this time changed radically, and that he had abandoned the concept of intuition for the concept of convention. The author sees this as being the central idea in The General Theory, and looks at the philosophical nature of this concept of convention in detail.
The present paper is a rejoinder to Michael Martin's "Reply to Davis" (Philo vol. 2, no. 1), which was a response to my "Is Belief in the Resurrection Rational? A Response to Michael Martin" (ibid.), which was itself a response to Martin's "Why the Resurrection Is Initially Improbable" (Philo vol. 1, no. 1), which in turn was a critique of various of my own writings on resurrection, especially Risen Indeed: Making Sense of the Resurrection.
Ethics and the University brings together the practice of ethics in the university (academic ethics) and the teaching of practical or applied ethics in the university. The book offers an explanation of practical ethics' recent emergence as a university subject, discusses research ethics, and explores the teaching of practical ethics, including sexual ethics. Michael Davis situates the subject of ethics in the university within a wider social and historical context that will be helpful in sorting out the complex issues.
The problem of the will has long been viewed as central to Heidegger's later thought. In the first book to focus on this problem, Bret W. Davis clarifies key issues from the philosopher's later period--particularly his critique of the culmination of the history of metaphysics in the technological "will to will" and the possibility of Gelassenheit or "releasement" from this willful way of being in the world--but also shows that the question of will is at the very heart of Heidegger's thinking, a pivotal issue in his path from Being and Time (1926) to "Time and Being" (1962). Moreover, the book demonstrates why popular critical interpretations of Heidegger's relation to the will are untenable, and why his so-called "turn" is not a simple "turnaround" from voluntarism to passivism. Davis explains why the later Heidegger's key notions of "non-willing" and "Gelassenheit" do not imply a mere abandonment of human action; rather, they are signposts in a search for an other way of being, a "higher activity" beyond the horizon of the will. While elucidating this search, his work also provides a critical look at the ambiguities, tensions, and inconsistencies of Heidegger's project, and does so in a way that allows us to follow the inner logic of the philosopher's struggles. As meticulous as it is bold, this comprehensive reinterpretation will change the way we think about Heidegger's politics and about the thrust of his philosophy as a whole.
Grice’s Razor is a methodological principle that many philosophers and linguists have used to help justify pragmatic explanations of linguistic phenomena over semantic explanations. A number of authors in the debate over contextualism argue that an invariant semantics together with Grice’s (1975) conversational principles can account for the contextual variability of knowledge claims. I show here that the defense of Grice’s Razor found in these “Gricean invariantists,” and its use against epistemic contextualism, display all the problems pointed out earlier in Davis (1998). The everyday variation in acceptable knowledge claims is better explained in terms of implicature than indexicality, but general conversational principles shed little light on whether ‘know’ is used hyperbolically, meiotically, or loosely in a context, although this issue is crucial in deciding what if anything ‘S knows p’ implicates. I present reasons favoring an account of the representative bank case in terms of loose use, making clear how they differ from Grice’s Razor.
'This is the most lucid and engaged account of Stuart Hall's work. Meticulously, and with an exemplary generosity, Helen Davis patiently unravels the threads of Hall's intellectual history. The result is a most useful and thoughtful book, which could prove to be indispensable for students of cultural studies' - Graeme Turner, University of Queensland Understanding Stuart Hall traces the development of one of the most influential and respected figures within cultural studies. Focusing on Stuart Hall's writings over a period of nearly fifty years, this volume offers students and academics a cogent and exploratory route through complex and overlapping areas of analysis. In her critical assessment of Hall's most important contributions to academic and public debate, Davis shows the extent to which his analyses of race and ethnicity have been informed by early studies of Marxism, class and 'societies structured in dominance'. Davis offers fresh insight into the formation of one of the most prolific, charismatic and controversial intellectuals of his generation. Despite having been branded a 'cultural pessimist', Stuart Hall has long been associated with encouraging new, cutting-edge scholarship within the field. This volume concludes with a discussion of Hall's most recent political and academic interventions and his continuing commitment to innovation within the visual arts.
Wayne A. Davis uses his theory of happiness to clarify and deepen Rand's theory of emotion. He distinguishes belief from knowledge, volitive from appetitive desire, and occurrent thinking from believing. He suggests that values in Rand's sense are things we volitively desire. Happiness is defined in terms of the sum of the products of the degree of belief and (volitive) desire functions over all thoughts. Davis then evaluates such Randian maxims as that happiness cannot be achieved by the pursuit of irrational whims, and that emotions are not tools of cognition, but products of one's premises—one's philosophy.
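The definition of happiness as a sum of products can be sketched schematically; the symbols below ($H$, $b$, $d$, $T$) are illustrative labels for this abstract, not Davis's own notation:

```latex
% A minimal sketch of the definition described above: happiness H is
% the sum, over all of a person's thoughts t, of the product of the
% degree-of-belief function b(t) and the volitive-desire function d(t).
H \;=\; \sum_{t \in T} b(t)\, d(t)
```

On this reading, a thought contributes strongly to happiness only when it is both firmly believed and strongly (volitively) desired; thoughts lacking either factor contribute little.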
This paper is motivated by Davis's theory of the individual in economics. Davis's analysis is applied to health economics, where the individual is conceived as a utility maximiser, although capable of regarding others' welfare through interdependent utility functions. Nonetheless, this provides a restrictive and flawed account, engendering a narrow and abstract conception of care grounded in Paretian value and Cartesian analytical frames. Instead, a richer account of the socially embedded individual is advocated, which employs collective intentionality analysis. This provides a sound foundation for research into an approach to health policy that promotes health as a basic human right.
If intense pain is “world-destroying,” as Elaine Scarry has argued, one of the ways nurses respond to that loss is by re-enacting the commonplace—both in practice and in writing—through daily, accumulating acts of care. Such care poses a critique of medicine’s emphasis on the exceptional moment and stresses forms of physical tending that are quotidian rather than heroic, ongoing rather than permanent or conclusive. I develop this view of care through the writings of nurses like Walt Whitman, Louisa May Alcott, Cortney Davis and Joyce Renwick.
In ‘Davis on Enjoyment: A Reply’, Richard Warner replies to three objections against his ‘Enjoyment’ that I raised in my ‘A Causal Theory of Enjoyment’, and concludes that one of my examples in fact demonstrates a serious deficiency of my own account. I argue that Warner’s replies to my objections are unsatisfactory, and that his objection to my account had a ready solution.
In this introduction to Part 1 of the Common Knowledge symposium, “Fuzzy Studies,” the journal's editor discusses four essays from the 1980s by Richard Rorty, in which Rorty chose to associate himself with various neopragmatists, Continental thinkers, and “left-wing Kuhnians” under the rubric of the “new fuzziness.” The term had been introduced as an insult by a philosopher of science with positivist leanings, but Rorty took it up as an “endearing” compliment, arguing that “to be less fuzzy” was also to be “less genial, tolerant, open-minded, and fallibilist.” He defined the “new fuzziness” as “an attempt to blur just those distinctions between the objective and subjective and between fact and value which the critical conception of rationality has developed.” This introduction also examines W. V. Quine's essay “Speaking of Objects” (1957), which describes objects as fuzzy “half-entities”; Clifford Geertz's essay “Blurred Genres” (1980), which advises social scientists that being “taxonomically upstanding” is futile; and Lotfi Zadeh's article “The Concept of a Linguistic Variable and Its Application to Approximate Reasoning” (1975), which abandons “Aristotelian, bivalent logic” in favor of a “fuzzy logic” based on Zadeh's “fuzzy set theory.” This introductory piece relates these theoretical works of the past half-century to the sorites paradox and to classical issues of vagueness raised and still unresolved in Western philosophy. Returning then to Rorty, the author questions how Rorty expected his endorsement of the “new fuzziness” to be applied, as proposed, to theology and politics. Suggesting that such applications are the natural work of historians, the author, having asked the historian Natalie Zemon Davis for comment, then quotes her response—which associates fuzzy studies, “common knowledge,” and peacemaking—at length.
The concept of the individual and his/her motivations is a bedrock of philosophy. All strands of thought at heart contain a particular theory of the individual. Economics, though, is guilty of taking this hugely important concept for granted without questioning how we theorize it. This superb book remedies this oversight. The new approach put forward by Davis is to pay more attention to what moral philosophy may offer us in the study of personal identity, self-consciousness and will. This crosses the traditional boundaries of economics and will shed new light on the distinction between positive and normative analysis in economics. With both heterodox and orthodox economics receiving a thorough analysis from Davis, this book is at once inclusive and revealing.
Based on his theory of animal rights, Regan concludes that humans are morally obligated to consume a vegetarian or vegan diet. When it was pointed out to him that even a vegan diet results in the loss of many animals of the field, he said that while that may be true, we are still obligated to consume a vegetarian/vegan diet because in total it would cause the least harm to animals (Least Harm Principle, or LHP) as compared to current agriculture. But is that conclusion valid? Is it possible that some other agricultural production alternatives may result in least harm to animals? An examination of this question shows that the LHP may actually be better served using food production systems that include both plant-based agriculture and a forage-ruminant-based agriculture as compared to a strict plant-based (vegan) system. Perhaps we are morally obligated to consume a diet containing both plants and ruminant (particularly cattle) animal products.
I argue that John Searle's (1980) influential Chinese room argument (CRA) against computationalism and strong AI survives existing objections, including Block's (1998) internalized systems reply, Fodor's (1991b) deviant causal chain reply, and Hauser's (1997) unconscious content reply. However, a new "essentialist" reply I construct shows that the CRA as presented by Searle is an unsound argument that relies on a question-begging appeal to intuition. My diagnosis of the CRA relies on an interpretation of computationalism as a scientific theory about the essential nature of intentional content; such theories often yield non-intuitive results in non-standard cases, and so cannot be judged by such intuitions. However, I further argue that the CRA can be transformed into a potentially valid argument against computationalism simply by reinterpreting it as an indeterminacy argument that shows that computationalism cannot explain the ordinary distinction between semantic content and sheer syntactic manipulation, and thus cannot be an adequate account of content. This conclusion admittedly rests on the arguable but plausible assumption that thought content is interestingly determinate. I conclude that the viability of computationalism and strong AI depends on their addressing the indeterminacy objection, but that it is currently unclear how this objection can be successfully addressed.
The causal theory of reasons holds that acting for a reason entails that the agent's action was caused by his or her beliefs and desires. While Donald Davidson (1963) and others effectively silenced the first objections to the theory, a new round has emerged. The most important recent attack is presented by Jonathan Dancy in Practical Reality (2000) and subsequent work. This paper will defend the causal theory against Dancy and others, including Schueler (1995), Stoutland (1999, 2001), and Ginet (2002). Dancy observes that our reasons are neither psychological states nor causes, and that our reasons can be both motivating and normative. I argue that these observations are fully compatible with the causal theory. According to the reductive version I develop for both cognitive and optative reasons, what it is for an action to be done for a reason is for certain beliefs and desires to cause the action in a particular way. Our reasons for action are the objects of some of those beliefs and desires. The causal process has two stages. This theory explains not only Dancy's observations, but also many other facts about reasons that alternative theories leave unexplained. I argue against Schueler and others that the non-appetitive desires entailed by acting for reasons are no less distinct and independent causal factors than the beliefs entailed. I go on to rebut arguments that the relation between psychological states and actions cannot be causal because it is non-empirical, rational, normative, or non-deterministic, and that explanations in terms of psychological causes are incompatible with explanations in terms of reasons.
One might expect functionalism to imply that personal identity is preserved through various operations on the brain, including transplantation. I argue that this is not clearly so even where the whole brain is transplanted. It is definitely not so in cases where only the cerebrum is transplanted, a conceivable kind of hemispherectomy, and even certain cases in which the brain is "gradually" replaced by an inorganic substitute. These results distinguish functionalism from other accounts taking what Eric T. Olson calls the "Psychological Approach" to personal identity, enabling it to avoid some of his objections to them.
Sydney Shoemaker has claimed that functionalism, a theory about mental states, implies a certain theory about the identity over time of persons, the entities that have mental states. He also claims that persons can survive a "Brain-State-Transfer" procedure. My examination of these claims includes description and analysis of imaginary cases, but, notably, not appeals to our "intuitions" concerning them. It turns out that Shoemaker's basic insight is correct: there is a connection between the two theories. Specifically, functionalism implies that "non-branching functional continuity" is sufficient for personal identity. But there is no implication that it is necessary. And the "BST" procedure may not preserve functional continuity. I consider several possibilities. On what may be the most attractive, the survivor of this (or any similar) procedure is not identical with the original person, but related to him or her as are the survivors in a case of fission.
David Lewis, Stewart Cohen, and Keith DeRose have proposed that sentences of the form 'S knows P' are indexical, and therefore differ in truth value from one context to another. On their indexical contextualism, the truth value of 'S knows P' is determined by whether S meets the epistemic standards of the speaker's context. I will not be concerned with relational forms of contextualism, according to which the truth value of 'S knows P' is determined by the standards of the subject S's context, regardless of the standards applying to the speaker making the knowledge claim. Relational contextualism is a form of normative relativism. Indexical contextualism is a semantic theory. When the subject is the speaker, as when S is the first-person pronoun 'I', the two forms of contextualism coincide. But otherwise, they diverge. I critically examine the principal arguments for indexicalism, detail linguistic evidence against it, and suggest a pragmatic alternative.
There is abundant evidence of contextual variation in the use of “S knows p.” Contextualist theories explain this variation in terms of semantic hypotheses that refer to standards of justification determined by “practical” features of either the subject’s context (Hawthorne & Stanley) or the ascriber’s context (Lewis, Cohen, & DeRose). There is extensive linguistic counterevidence to both forms. I maintain that the contextual variation of knowledge claims is better explained by common pragmatic factors. I show here that one is variable strictness. “S knows p” is commonly used loosely to implicate “S is close enough to knowing p for contextually indicated purposes.” A pragmatic account may use a range of semantics, even contextualist. I use an invariant semantics on which knowledge requires complete justification. This combination meets the Moorean constraint as well as any linguistic theory should, and meets the intuition constraint much better than contextualism. There is no need for ad hoc error theories. The variation in conditions of assertability and practical rationality is better explained by variably strict constraints. It will follow that “S knows p” is used loosely to implicate that the conditions for asserting “p” and using it in practical reasoning are satisfied.
Christopher Peacocke has presented an original version of the perennial philosophical thesis that we can gain substantive metaphysical and epistemological insight from an analysis of our concepts. Peacocke's innovation is to look at how concepts are individuated by their possession conditions, which he believes can be specified in terms of conditions in which certain propositions containing those concepts are accepted. The ability to provide such insight is one of Peacocke's major arguments for his theory of concepts. I will critically examine this "fruitfulness" argument by looking at one philosophical problem Peacocke uses his theory to solve and treats in depth. Peacocke (1999, 2001) defines what he calls the "Integration Challenge." The challenge is to integrate our metaphysics with our epistemology by showing that they are mutually acceptable. Peacocke's key conclusion is that the Integration Challenge can be met for "epistemically individuated concepts." A good theory of content, he believes, will close the apparent gap between an account of truth for any given subject matter and an overall account of knowledge. I shall argue that there are no epistemically individuated concepts, and shall critically analyze Peacocke's arguments for their existence. I will suggest more generally that the possession conditions of concepts and their principles of individuation shed little light on the epistemology or metaphysics of things other than concepts. My broader goal is to shed light on what concepts are by showing that they are more fundamental than the sorts of cognitive and epistemic factors a leading theory uses to define them.
The idea that English has more than one declarative "mood" has been dismissed as superstitious by empirically-minded grammarians of English for centuries--with such spectacular unsuccess, however, that the indicative/subjunctive dichotomy stands today as a cornerstone for philosophical and logical speculation about "conditionals." Let me be next into the breach. I shall urge that there is no grammatical basis for any such distinction. And as for the particular adjudications of mood logicians and philosophers actually propose, there is neither rhyme nor reason to them. My bent, then, is basically destructive. But I shall also be outlining a better taxonomy.
The first code of professional ethics must: (1) be a code of ethics; (2) apply to members of a profession; (3) apply to all members of that profession; and (4) apply only to members of that profession. The value of these criteria depends on how we define “code”, “ethics”, and “profession”, terms the literature on professions has defined in many ways. This paper applies one set of definitions of “code”, “ethics”, and “profession” to a part of what we now know of the history of professions, thereby illustrating how the choice of definition can alter substantially both our answer to the question of which came first and (more importantly) our understanding of professional codes (and the professions that adopt them). Because most who write on codes of professional ethics seem to take for granted that physicians produced the first professional code, whether the Hippocratic Oath, Percival’s Medical Ethics, the 1847 Code of Ethics of the American Medical Association (AMA), or some other document, I focus my discussion on these codes.
In this paper I reply to Keith Yandell's recent charge that Anselmian theists cannot also be Trinitarians. Yandell's case turns on the contention that it is impossible to individuate Trinitarian members if they exist necessarily. Since the ranks of Anselmian Trinitarians include the likes of Alvin Plantinga, Robert Adams, and Thomas Flint, Yandell's claim is of considerable interest and import. I argue, by contrast, that Anselmians can appeal to what Plantinga calls an essence or haecceity – a property essentially unique to an object – to distinguish Trinitarian members. I go on to show that the main Yandellian objection to this individuative strategy is not successful.