The central claim of this paper is that the culture of entitlement in education is incoherent to the extent that it rejects concepts of educational achievement. It gives an account of some of the conceptual features of achievement and educational achievement, and argues that although educational and academic achievement are closely linked with each other, they are distinct. It tries to show why academic practices are central to our conceptions of the value of educational achievement. In terms of the concept of epistemological access, it argues that the agency of the learner is necessary to educational access, and hence to educational achievement, but that the culture of entitlement in education has a strong tendency to deny this. The paper tries to show in what ways the culture of entitlement presupposes the concept of educational achievement.
Plato's Cretan City is a thorough investigation into the roots of Plato's Laws and a compelling explication of his ideas on legislation and social institutions. A dialogue among three travelers, the Laws proposes a detailed plan for administering a new colony on the island of Crete. In examining this dialogue, Glenn Morrow describes the contemporary Greek institutions in Athens, Crete, and Sparta on which Plato based his model city, and explores the philosopher's proposed regulations concerning property, the family, government, and the administration of justice, education, and religion. He approaches the Laws as both a living document of reform and a philosophical inquiry into humankind's highest earthly duty.
Recent work in various branches of philosophy has reinvigorated debate over the psychology behind moral judgment. Using Marc Hauser's categorization of theories as “Kantian,” “Humean,” or “Rawlsian” to frame the discussion, I argue that the existing evidence weighs against the Kantian model and partly in favor of both the Humean and the Rawlsian models. Emotions do play a causal role in the formation of our moral judgments, as the Humean model claims, but there are also unconscious principles shaping our moral judgments, as the Rawlsian model predicts. Thus, Hauser's tripartite division of possible models of moral psychology is inadequate. Drawing on research in cognitive neuroscience, clinical and behavioral psychology, and psychopathology, I sketch a new, developmental sentimentalist model of moral psychology. I call it a “Mencian” model, after the Confucian philosopher Mencius. On this model, moral judgments are caused by emotions, but because of the way emotions are mapped onto particular actions, moral judgments unconsciously reflect certain principled distinctions.
Traditional representations of philosophy have tended to prize the role of reason in the discipline. These accounts focus exclusively on ideas and arguments as animating forces in the field. But anecdotal evidence and more rigorous sociological studies suggest there is more going on in philosophy. In this article, we present two hypotheses about social factors in the field: that social factors influence the development of philosophy, and that status and reputation—and thus social influence—will tend to be awarded to philosophers who offer rationally compelling arguments for their views. In order to test these hypotheses, we need a more comprehensive grasp of the field than traditional representations afford. In particular, we need more substantial data about various social connections between philosophers. This investigation belongs to a naturalized metaphilosophy, an empirical study of the discipline itself, and it offers prospects for a fuller and more reliable understanding of philosophy.
Climate engineering (CE), the intentional modification of the climate in order to reduce the effects of increasing greenhouse gas concentrations, is sometimes touted as a potential response to climate change. Increasing interest in the topic has led to proposals for empirical tests of hypothesized CE techniques, which raise serious ethical concerns. We propose three ethical guidelines for CE researchers, derived from the ethics literature on research with human and animal subjects, applicable in the event that CE research progresses beyond computer modeling. The Principle of Respect requires that the scientific community secure the global public's consent, voiced through their governmental representatives, before beginning any empirical research. The Principle of Beneficence and Justice requires that researchers strive for a favorable risk–benefit ratio and a fair distribution of risks and anticipated benefits, all while protecting the basic rights of affected individuals. Finally, the Minimization Principle requires that researchers minimize the extent and intensity of each experiment by ensuring that no experiments last longer, cover a greater geographical extent, or have a greater impact on the climate, ecosystem, or human welfare than is necessary to test the specific hypotheses in question. Field experiments that might affect humans or ecosystems in significant ways should not proceed until a full discussion of the ethics of CE research occurs and appropriate institutions for regulating such experiments are established.
The recent translation into English of Jean-Luc Marion’s essay “Saint Thomas Aquinas and Onto-Theo-Logy” provides an opportunity to re-examine the significance of Marion’s earlier criticisms of Aquinas (set forth, as is well known, in God without Being) in the light of his most current position on Aquinas. Toward this end, I discuss the role that the doctrine of analogy plays in Marion’s reassessment, and partial retraction, of the controversial indictment of Aquinas that was presented in God without Being. Marion’s claim that the Thomistic conception of God as ipsum esse should be understood by “starting from the distance of God” is highlighted in order to elucidate how, for Aquinas (at least as Marion reads him), the doctrine of analogy functions phenomenologically, as do the divine names generally, to manifest the character of God as infinite goodness and excessive givenness.
Ever since the publication of Dieu sans l’être in 1982, Jean-Luc Marion’s various (and varying) pronouncements on the status and meaning of esse in Aquinas have excited a good deal of interest and controversy among Thomists. Marion’s evolving understanding of Thomistic metaphysics in general, and of Thomistic analogy in particular, has been commended for its openness to correction even as it has been criticized for what many still regard as its residual deficiencies. All such criticisms, however, neglect to take account of the phenomenological provenance of Marion’s concerns, and to this extent they risk misunderstanding them. Ironically, Marion’s phenomenological approach to Aquinas intends to safeguard precisely what his Thomist critics think he has jettisoned: namely, our ability to speak about God in a way that says something meaningful—or perhaps better, reveals something meaningful—about God to us. The apophatic language Marion uses to make this point should be taken as a reminder to his fellow Christians (and especially to those who happen to be Thomists) who rightfully desire to speak of God of the danger that is involved in doing so. If we interpret Aquinas’s use of the divine names according to the phenomenological horizon of distance and thus think the various names of God “according to truly theological determinations,” Marion suggests, we can avoid the danger of lapsing into a conceptual idolatry of univocal predication that occludes their phenomenological disclosiveness.
Why do some groups succeed where others fail? We hypothesise that collaborative success is achieved when the relationship between the dyad's prior expertise and the complexity of the task creates a situation that affords constructive and interactive processes between group members. We call this state the zone of proximal facilitation, in which the dyad's prior knowledge and experience enables them to benefit from both knowledge-based problem-solving processes (e.g., elaboration, explanation, and error correction) and collaborative skills (e.g., creating common ground, maintaining joint attention to the task). To test this hypothesis we conducted an experiment in which participants with different levels of aviation expertise—experts (flight instructors), novices (student pilots), and non-pilots—read flight problem scenarios of varying complexity and had to identify the problem and generate a solution, either with another participant of the same level of expertise or alone. The non-pilots showed collaborative inhibition on problem identification, in which dyads performed worse than their predicted potential for both simple and complex scenarios, whereas the novices and experts did not. On solution generation the non-pilot and novice dyads performed at their predicted potential, with no collaborative inhibition on either simple or complex scenarios. In contrast, expert dyads showed collaborative gains, with dyads performing above their predicted potential, but only for the complex scenarios. On simple scenarios the expert dyads showed collaborative inhibition and performed worse than their predicted potential. We discuss the implications of these results for theories of collaborative problem solving.
Steven Pinker sets out over a dozen arguments in The language instinct (Morrow, New York, 1994) for his widely shared view that natural language is inadequate as a medium for thought. Thus he argues we must suppose that the primary medium of thought and inference is an innate propositional representation system, mentalese. I reply to the various arguments and so defend the view that some thought essentially involves natural language. I argue mentalese doesn't solve any of the problems Pinker cites for the view that we think in natural language. So I don't think I think the way he thinks I think.
In this paper we argue that dissociative identity disorder (DID) is best interpreted as a causal model of a (possible) post-traumatic psychological process, as a mechanical model of an abnormal psychological condition. From this perspective we examine and criticize the evidential status of DID, and we demonstrate that there is really no good reason to believe that anyone has ever suffered from DID so understood. This is so because the proponents of DID violate basic methodological principles of good causal modeling. When every ounce of your concentration is fixed upon blasting a winged pig out of the sky, you do not question its species' ontological status. James Morrow, City of Truth (1990).
Bicchieri (The grammar of society: The nature and dynamics of norms, 2006, xi) presents a formal analysis of norms that answers the questions of “when, how, and to what degree” norms affect human behavior in the play of games. The purpose of this paper is to apply a variation of the Bicchieri norms analysis to generate a model of norms-based play of the traditional deterrence game (Zagare and Kilgour, Int Stud Q 37:1–27, 1993; Morrow, Game theory for political scientists, 1994), the paradigmatic model of conflict initiation in International Relations. The deterrence game is modeled here as a sequential decision problem. As such, our analysis is an adaptation of Bicchieri’s game-theoretic formalization of norms to what we will call the norms account of the game. We find that the standard account of the traditional deterrence game is a special case of the norms account of the game. We also show that the adaptation of Bicchieri’s analysis of social norms yields new and interesting claims regarding when, how, and to what degree norms operate as a constraint on risk-related behavior in the traditional deterrence game. Moreover, we discuss how the results of the model provide testable propositions of relevance to the role of norms in international interactions.
It is with great pleasure that I remember my visit to the University of Alberta in Fall 1995, and I would like especially to thank Eric Higgs, Andrew Light, and Ray Morrow for making my visit an especially memorable one. During my visit, we participated in a series of seminars on postmodern theory, critical theory, media culture, cultural studies, and the philosophy of technology, and not surprisingly these themes were the focus of the symposium on my book Media Culture, which we are now committing to print. Accordingly, I shall respond to each of the three commentators, focusing on the themes which they highlighted. This will enable me to clarify my positions on media culture, the philosophy of technology, and the Internet (Higgs); social theory, media culture, and cultural studies (Morrow); and media culture, identity, and identity politics (Light). The interconnection of these issues in Media Culture and my work in general points, I would argue, to the need to develop transdisciplinary theories to confront the issues, problems, and challenges of the contemporary moment as we negotiate the troubled terrain between the modern and the postmodern.
Demographic differences among consumer groups have become increasingly important to the development of marketing strategies. Marketers depend heavily on the sales force to implement strategies at the consumer level and, not surprisingly, different groups may view the salesperson’s role differently. Unfortunately, unethical sales practices targeted at various consumer groups, and especially at seniors, have been utilized as well. The purpose of this study is to provide initial empirical evidence of the ethical ideological make-up of four age segments outlined by Strauss and Howe (1991, Generations: The History of America’s Future 1584–2069, Morrow, New York) and to examine the propensity for these groups (seniors, in particular) to respond differentially to potentially unethical sales tactics. Data were collected from 179 respondents representing the four generational age groups. MANOVA revealed that the seniors in this study were distinct with respect to ethical ideology and less accepting of unethical sales tactics. Managerial implications are discussed for sales organizations to maximize their effectiveness across consumer groups.
Why we need certainty in religion.--Ways of reaching certainty.--The way of authority: or what others can tell us about God.--The way of intuition: or meeting God face to face.--The way of reasoning: or the test of consistency.--The way of experiment: or the practice of the presence of God.--The certainty of to-day and the hope for to-morrow.
Lawrence O'Donnell, Jr., Deadly Force: The True Story of How a Badge Can Become a License to Kill. New York: William Morrow and Company, 1983, 384 pp. Robert E. Goodin, Political Theory and Public Policy. Chicago: University of Chicago Press, 1982, ix + 286 pp.
In time for the holiday season—in an appropriate and enticing new format, and with a striking new jacket—a spectacular hardcover reissue of one of the most beloved books of our time. Since it was first published in 1955, Gift from the Sea has enlightened and offered solace to readers on subjects from love and marriage to peace and contentment.
Neuroscience of Rule-Guided Behavior brings together, for the first time, the experiments and theories that have created the new science of rules. Rules are central to human behavior, but until now the field of neuroscience lacked a synthetic approach to understanding them. How are rules learned, retrieved from memory, maintained in consciousness, and implemented? How are they used to solve problems and select among actions and activities? How are the various levels of rules represented in the brain, ranging from simple conditional ones ("if a traffic light turns red, then stop") to rules and strategies of such sophistication that they defy description? And how do brain regions interact to produce rule-guided behavior? These are among the most fundamental questions facing neuroscience, but until recently there was relatively little progress in answering them. It was difficult to probe brain mechanisms in humans, and expert opinion held that animals lacked the capacity for such high-level behavior. However, rapid progress in neuroimaging technology has allowed investigators to explore brain mechanisms in humans, while increasingly sophisticated behavioral methods have revealed that animals can and do use high-level rules to control their behavior. The resulting explosion of information has led to a new science of rules, but it has also produced a plethora of overlapping ideas and terminology and a field sorely in need of synthesis. In this book, Silvia Bunge and Jonathan Wallis bring together the world's leading cognitive and systems neuroscientists to explain the most recent research on rule-guided behavior. Their work covers a wide range of disciplines and methods, including neuropsychology, functional magnetic resonance imaging, neurophysiology, electroencephalography, neuropharmacology, near-infrared spectroscopy, and transcranial magnetic stimulation.
This unprecedented synthesis is a must-read for anyone interested in how complex behavior is controlled and organized by the brain.
Wallis draws on his experience in urban ghettos to show why traditional liberal and conservative options that emphasize either social justice or personal values fall short. He looks outside the traditional corridors of power to find solutions. Foreword by Garry Wills; preface by Cornel West.
This chapter argues that the standard conception of Spinoza as a fellow-travelling mechanical philosopher and proto-scientific naturalist is misleading. It argues, first, that Spinoza’s account of the proper method for the study of nature presented in the Theological-Political Treatise (TTP) points away from the one commonly associated with the mechanical philosophy. Moreover, throughout his works Spinoza’s views on the very possibility of knowledge of nature are decidedly sceptical (as specified below). Third, in the seventeenth-century debates over proper methods in the sciences, Spinoza sided with those who criticized the aspirations of the physico-mathematicians (Galileo, Huygens, Wallis, Wren, etc.), who thought the application of mathematics to nature was the way to make progress. In particular, he offers grounds for doubting their confidence in the significance of measurement as well as their piecemeal methodology (see section 2). Along the way, this chapter offers a new interpretation of common notions in the context of treating Spinoza’s account of motion (see section 3).
In this paper I criticize the most significant recent examples of the practical knowledge analysis of knowledge-how in the philosophical literature: David Carr [1979, Mind, 88, 394–409; 1981a, American Philosophical Quarterly, 18, 53–61; 1981b, Journal of Philosophy of Education, 15(1), 87–96] and Stanley & Williamson [2001, Journal of Philosophy, 98(8), 411–444]. I stress the importance of know-how in our contemporary understanding of the mind, and offer the beginnings of a treatment of know-how capable of providing insight into the use of know-how in contemporary cognitive science. Specifically, I claim that Carr’s necessary conditions for know-how fail to capture the distinction he himself draws between ability and knowing-how. Moreover, Carr ties knowing-how to conscious intent, and to an explicit knowledge of procedural rules. I argue that both moves are mistakes, which together render Carr’s theory an inadequate account both of common ascriptions of knowledge-how and of widely accepted ascriptions of knowledge-how within explanations in cognitive science. Finally, I note that Carr’s conditions fail to capture intuitions (he shares) regarding the ascription of know-how to persons lacking ability. I then consider the position advocated by Stanley & Williamson (2001), which seems to avoid Carr’s commitments to conscious intent and explicit knowledge while still maintaining that “knowledge-how is simply a species of knowledge-that” (Stanley & Williamson, 2001, p. 411). I argue that Stanley and Williamson’s attempt to frame a reductionist view that avoids consciously occurrent beliefs during exercises of knowledge-how and explicit knowledge of procedural rules is both empirically implausible and explanatorily vacuous. In criticizing these theories I challenge the presuppositions of the most pervasive response to Ryle in the philosophic literature, what might be described as “the received view.” I also establish several facts about knowing-how.
First, neither conscious intent nor explicit representation (much less conscious representation) of procedural rules is necessary for knowing-how, given the theory of cognition current in cognitive science. I argue that the discussed analyses fail to capture the necessary conditions for knowledge-how because know-how requires the instantiation of an ability and of the capacities necessary for exploiting an ability—not conscious awareness of purpose or explicit knowledge of rules. Second, one must understand knowledge-how as task-specific, i.e., as presupposing certain underlying conditions. Conceiving of know-how as task-specific allows one to understand ascriptions of know-how in the absence of ability as counterfactual ascriptions based upon underlying competence.
A mechanism for planning ahead would appear to be essential to any creature with more than insect-level intelligence. In this paper it is shown how planning, using full means-ends analysis, can be had while avoiding the so-called symbol grounding problem. The key role of knowledge representation in intelligence has been acknowledged since at least the Enlightenment, but the advent of the computer has made it possible to explore the limits of alternate schemes, and to explore the nature of our everyday understanding of the world around us. In particular, artificial intelligence (AI) and robotics have forced a close examination, by people other than philosophers, of what it means to say for instance that "snow is white." One interpretation of the "new AI" is that it is questioning the need for representation altogether. Brooks and others have shown how a range of intelligent behaviors can be had without representation, and this paper goes one step further, showing how intending to do things can be achieved without symbolic representation. The paper gives a concrete example of a mechanism in terms of robots that play soccer. It describes a belief, desire and intention (BDI) architecture that plans in terms of activities. The result is a situated agent that plans to do things with no more ontological commitment than the reactive systems Brooks described in his seminal paper, "Intelligence without Representation."
The article evaluates the Domain Postulate of the Classical Model of Science and the related Aristotelian prohibition rule on kind-crossing as interpretative tools in the history of the development of mathematics into a general science of quantities. Special reference is made to Proclus’ commentary on the first book of Euclid’s Elements, to the sixteenth-century translations of Euclid’s work into Latin, and to the works of Stevin, Wallis, Viète and Descartes. The prohibition rule on kind-crossing formulated by Aristotle in Posterior Analytics is used to distinguish between conceptions that share the same name but are substantively different: for example, the search for a broader genus including all mathematical objects; the search for a common character of different species of mathematical objects; and the effort to treat magnitudes as numbers.
Equally surprisingly, Descartes’s paranoid belief was shared by several contemporary mathematicians, among them Isaac Barrow, John Wallis and Edmond Halley. (Huxley 1959, pp. 354-355.) In the light of our fuller knowledge of history it is easy to smile at Descartes. It has even been argued by Netz that analysis was in fact for ancient Greek geometers a method of presenting their results (see Netz 2000). But in a deeper sense Descartes perceived something interesting in the historical record. We look in vain in the writings of Greek mathematicians for a full explanation of what this famous method was. And I will argue for an answer to the question why this lacuna is there: not because Greek geometers wanted to hide this method, but because they did not fully understand it. It is instructive to note the ambivalent attitude of the most rigorous mathematician of the period, Isaac Newton, to the method of analysis. He used it himself in his own mathematical work and in the expositions of that work. Yet when the mathematical push came to physical and cosmological shove, he formulated his Principia entirely in..
Leibniz is well known for his formulation of the infinitesimal calculus. Nevertheless, the nature and logic of his discovery are seldom questioned: does it belong more to mathematics or metaphysics, and how is it connected to his physics? This book, composed of fourteen essays, investigates the nature and foundation of the calculus, its relationship to the physics of force and principle of continuity, and its overall method and metaphysics. The Leibnizian calculus is presented in its origin and context together with its main contributors: Archimedes, Cavalieri, Wallis, Hobbes, Pascal, Huygens, Bernoulli, and Nieuwentijt. Many of us know and probably have used the Leibnizian formula: to ..
In this paper I show that proofs by contradiction were a serious problem in seventeenth century mathematics and philosophy. Their status was put into question and positive mathematical developments emerged from such reflections. I analyse how mathematics, logic, and epistemology are intertwined in the issue at hand. The mathematical part describes Cavalieri's and Guldin's mathematical programmes of providing a development of parts of geometry free of proofs by contradiction. The logical part shows how the traditional Aristotelean doctrine that perfect demonstrations are causal demonstrations influenced the reflection on proofs by contradiction. The main protagonist of this part is Wallis. Finally, I analyse some epistemological developments arising from the Cartesian tradition. In particular, I look at Arnauld's programme of providing an epistemologically motivated reformulation of Geometry free of proofs by contradiction. The conclusion explains in which sense these epistemological reflections can be compared with those informing contemporary intuitionism.