Martin Heidegger is perhaps the most controversial philosopher of the twentieth century, yet little has been written about his work and its significance for educational thought. This unique collection by a group of international scholars reexamines Heidegger's work and its legacy for educational thought.
Recent scholarship, though undermining the critical/pre-critical distinction in various parts of Kant's philosophy, presupposes an anachronistic division of his system into epistemology, metaphysics, and ethics. What needs to be treated independently as a distinct topic is the general question of Kant's conception of philosophy as such. This thesis aims to fill the gap: to write the history, not of some part of Kant's system, but of the idea of the whole. The study examines the origins and development of his view of the nature of philosophy as a "theory of autonomy". It covers the formative period, the first draft of a theory of autonomy as an account of morality, the second draft as a conception of reason, and the third draft involving the duality of theoretical and moral reason. The thesis re-maps stretches along Kant's route to transcendental idealism by plotting several junctures which have hitherto not received the attention they deserve. The foundations of ethics play as important a role in the rise of German Idealism as the foundations of mathematics later do for analytic philosophy. Chapter I marks Kant's distinction between willing and knowing, his break with Wolff's perfectionism, and the invention of the modern self freed from hierarchy and teleology. Chapter II reveals Kant's debt to Hutcheson for the question: how is a Categorical Imperative possible? Chapter III shows how Kant appropriates Rousseau's contract model to justify morality. Chapter IV covers the generalization of this breakthrough to cognition and the formation of an autonomous transcendental subject as the ground of both the Categories and the Categorical Imperative. Chapter V presents Kant's attempt to unify theoretical and moral autonomy within his account of reason. Chapter VI concludes with transcendental idealism as a dualistic view of reason.
Summary: What makes Hutto's account special is his commitment to the rejection of content, the point at which he becomes a genuine radical. This is not just another book about enactivism; it is an enactive book for everyone, written by an enactivist.
In the winter of 1872, while still a young professor in Basel, having just published his first book, The Birth of Tragedy, Friedrich Nietzsche delivered a series of lectures entitled “On the Future of Our Educational Institutions.”1 These five lectures were well received at the time. It appears that Nietzsche intended to add two more lectures and to publish the whole series as a book. By the end of the year, however, the title of the series featured only as the second title in his self-parodying essay, “Five Forewords to Five Unwritten Books.”2 The lectures remained unpublished during Nietzsche's lifetime….
This book contains essays offering literary and philosophical accounts of who we are simply as persons, and essays that highlight who we are in light of communal ties. ACTC educators model the intellectual life for students and colleagues by showing how to read texts carefully and with sophistication.
It is natural for those with permissive attitudes toward abortion to suppose that, if they have examined all of the arguments they know against abortion and have concluded that they fail, their moral deliberations are at an end. Surprisingly, this is not the case, as I argue. This is because the mere risk that one of those arguments succeeds can generate a moral reason that counts against the act. If this is so, then liberals may be mistaken about the morality of abortion. However, conservatives who claim that considerations of risk rule out abortion in general are mistaken as well. Instead, risk-based considerations generate an important but not necessarily decisive reason to avoid abortion. The more general issue that emerges is how to accommodate fallibilism about practical judgment in our decision-making.
The principle that physicians should always act in the best interests of the present patient is widely endorsed. At the same time, and often within the same document, it is recognised that there are appropriate exceptions to this principle. Unfortunately, little, if any, guidance is provided regarding which exceptions are appropriate and how they should be handled. These circumstances might be tenable if the appropriate exceptions were rare. Yet, evaluation of the literature reveals that there are numerous exceptions, several of which pervade clinical medicine. This situation leaves physicians without adequate guidance on when to allow exceptions and how to address them, increasing the chances for unfairness in practice. The present article considers the range of exceptions, illustrates how the lack of guidance poses ethical concern and describes an alternative account of physician obligations to address this concern.
By constructing a maximal incomplete d.r.e. degree, the nondensity of the partial order of the d.r.e. degrees is established. An easy modification yields the nondensity of the n-r.e. degrees and of the ω-r.e. degrees.
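For readers outside recursion theory, the following spells out the terminology in standard notation; it is background only, not part of the construction reported above. A set is d.r.e. (difference-r.e.) if it is the difference of two recursively enumerable sets, and a maximal incomplete d.r.e. degree witnesses the failure of density directly:
\[
  A \text{ is d.r.e.} \;\iff\; A = B \setminus C \ \text{for some r.e. sets } B, C,
\]
\[
  \exists\,\mathbf{d}\ \text{d.r.e.}\ \bigl[\,\mathbf{d} < \mathbf{0}' \;\wedge\; \neg\,\exists\,\mathbf{e}\ \text{d.r.e.}\ (\mathbf{d} < \mathbf{e} < \mathbf{0}')\,\bigr],
\]
so the interval between $\mathbf{d}$ and $\mathbf{0}'$ contains no d.r.e. degree and the d.r.e. degrees are not densely ordered; the $n$-r.e. sets generalize this to recursive approximations allowed at most $n$ mind-changes per argument.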
Many medical ethicists accept the thesis that there is no moral difference between withholding and withdrawing life-sustaining therapy. In this paper, we offer an interesting counterexample which shows that this thesis is not always true. Withholding is distinguished from withdrawing by the simple fact that therapy must have already been initiated in order to speak coherently about withdrawal. Provided that there is a genuine need and that therapy is biomedically effective, the historical fact that therapy has been initiated entails a claim to continue therapy that cannot be attributed to patients who have not yet received therapy. This intrinsic difference between withholding and withdrawing therapy is of moral importance. In many instances, patients will waive this claim. But when one considers withdrawing therapy from one patient to help another in a setting of scarce resources, this intrinsic moral difference comes into sharp focus. In an era of shrinking medical resources, this difference cannot be ignored.
In a material theory of induction, inductive inferences are warranted by facts that prevail locally. This approach, it is urged, is preferable to formal theories of induction in which the good inductive inferences are delineated as those conforming to some universal schema. An inductive inference problem concerning indeterministic, non-probabilistic systems in physics is posed and it is argued that Bayesians cannot responsibly analyze it, thereby demonstrating that the probability calculus is not the universal logic of induction.
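A well-known example of an indeterministic yet non-probabilistic Newtonian system, offered here only as an illustration of the kind of case at issue and not necessarily the one the author poses, is Norton's dome, where a unit mass at the apex satisfies
\[
  \frac{d^{2}r}{dt^{2}} = \sqrt{r},
\]
which admits, besides the trivial solution $r(t)=0$, a family of spontaneous-motion solutions
\[
  r(t) =
  \begin{cases}
    0, & t \le T,\\[4pt]
    \dfrac{(t-T)^{4}}{144}, & t \ge T,
  \end{cases}
\]
for every excitation time $T \ge 0$; nothing in the dynamics assigns probabilities over the possible values of $T$, which is why a Bayesian prior over the outcomes lacks any principled basis.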
In this article, we develop an approach for the moral assessment of research and development networks on the basis of the reflective equilibrium approach proposed by Rawls and Daniels. The reflective equilibrium approach aims at coherence between moral judgments, principles, and background theories. We use this approach because it takes seriously the moral judgments of the actors involved in R&D, while also leaving room for critical reflection on these judgments. It is shown that two norms, namely reflective learning and openness and inclusiveness, which are used in the literature on policy and technological networks, contribute to achieving a justified overlapping consensus. We apply the approach to a case study about the development of an innovative sewage treatment technology and show how in this case the two norms are or could be instrumental in achieving a justified overlapping consensus on relevant moral issues.
How does the study of society relate to the study of the people it comprises? This longstanding question is partly one of method, but mainly one of fact, of how independent the objects of these two studies, societies and people, are. It is commonly put as a question of reduction, and I shall tackle it in that form: does sociology reduce in principle to individual psychology? I follow custom in calling the claim that it does ‘individualism’ and its denial ‘holism’.
In this paper I show that we have strong empirical and theoretical reasons to treat the verbs we use in our semantic theorizing—particularly ‘refers to’, ‘applies to’, and ‘is true of’—as intensional transitive verbs (ITVs). Stating our semantic theories with intensional vocabulary allows us to partially reconcile two competing approaches to the nature and subject-matter of semantics: the Chomskian approach, on which semantics is non-relational, internalistic, and concerns the psychology of language users, and the Lewisian approach, on which semantics is fully relational, specifies truth-conditions, and has metaphysical implications. ITVs have two readings: an intensional, de dicto reading, and a relational, de re reading. A semantic theory stated with the de dicto readings of our semantic verbs captures the core insights of the Chomskian approach to semantics, in part because it allows us to assign extremely fine-grained semantic values to expressions, even when those expressions are empty. On the other hand, the de re reading yields a theory that is fully relational, and issues in truth-conditions. The resulting theories are related—and compatible—in that they are expressed by two different readings of the very same semantic vocabulary, and plausibly, the distinction between these two readings is one of scope.
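As a rough illustration of the scope distinction the abstract trades on (the notation is a generic intensional-logic sketch, not the author's own formalism), compare the two readings of a sentence such as "'N' refers to a planet":
\[
\begin{aligned}
  \textit{de re:} \quad & \exists x\,\bigl[\mathrm{Planet}(x) \wedge \mathrm{Refer}(\mathsf{N}, x)\bigr],\\
  \textit{de dicto:} \quad & \mathrm{Refer}\bigl(\mathsf{N}, \,{}^{\wedge}\lambda P\,\exists x\,[\mathrm{Planet}(x) \wedge P(x)]\bigr).
\end{aligned}
\]
On the de re reading the existential quantifier takes wide scope and some planet must exist to stand in the relation; on the de dicto reading the quantified phrase remains inside the intensional argument position, so the claim stays contentful even when the name is empty, as with 'Vulcan'.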
In this book, Daniel Hutto and Erik Myin promote the cause of a radically enactive, embodied approach to cognition that holds that some kinds of minds -- basic minds -- are neither best explained by processes involving the manipulation of ...
An appropriate kind of curved Hilbert space is developed in such a manner that it admits operators of $\mathcal{C}$- and $\mathfrak{D}$-differentiation, which are the analogues of the familiar covariant and D-differentiation available in a manifold. These tools are then employed to shed light on the space-time structure of Quantum Mechanics, from the points of view of the Feynman ‘path integral’ and of canonical quantisation. (The latter contains, as a special case, quantisation in arbitrary curvilinear coordinates when space is flat.) The influence of curvature is emphasised throughout, with an illustration provided by the Aharonov-Bohm effect.
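For orientation, the textbook form of the Aharonov-Bohm phase mentioned as the illustration, stated here in standard notation and independently of the paper's curved-Hilbert-space machinery, is
\[
  \Delta\varphi \;=\; \frac{q}{\hbar}\oint_{\gamma} \mathbf{A}\cdot d\boldsymbol{\ell} \;=\; \frac{q\,\Phi}{\hbar},
\]
where $\mathbf{A}$ is the vector potential along a closed loop $\gamma$ enclosing magnetic flux $\Phi$: the relative phase acquired by the interfering paths depends on the potential even in regions where the field strength itself vanishes.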
There have in recent years been at least two important attempts to get to grips with Aristotle's conception of dialectic. I have in mind those by Martha C. Nussbaum in ‘Saving Aristotle's appearances’, which is chapter 8 of her The Fragility of Goodness, and by Terence H. Irwin in his important, though in my opinion somewhat misguided, book Aristotle's First Principles. There is a sense in which both of these writers are reacting to the work of G. E. L. Owen on cognate matters, particularly his well-known paper ‘Tithenai ta phainomena’. Owen himself was in part reacting to what I suppose is the traditional view of how Aristotle regarded dialectic, as revealed in Topics I. 1. On that view dialectic is for Aristotle a lesser way of proceeding than is demonstration, the method of science. For demonstration proceeds from premises which are accepted as true in themselves and moves from them to conclusions which follow necessarily from those premises; and the middle term of such a demonstrative syllogism then provides the ‘reason why’ for the truth of the conclusion. Dialectic proceeds from premises which are accepted on a lesser basis ‘by everyone or by the majority or by the wise, i.e. by all, or by the majority, or by the most notable and reputable of them’, and proceeds deductively from them to further conclusions.
Nelson Goodman's distinction between autographic and allographic arts is appealing, we suggest, because it promises to resolve several prima facie puzzles. We consider and rebut a recent argument that alleges that digital images explode the autographic/allographic distinction. Regardless, there is another familiar problem with the distinction, especially as Goodman formulates it: it seems to entirely ignore an important sense in which all artworks are historical. We note in reply that some artworks can be considered both as historical products and as formal structures. Talk about such works is ambiguous between the two conceptions. This allows us to recover Goodman's distinction: art forms that are ambiguous in this way are allographic. With that formulation settled, we argue that digital images are allographic. We conclude by considering the objection that digital photographs, unlike other digital images, would count as autographic by our criterion; we reply that this points to the vexed nature of photography rather than any problem with the distinction.
Entities of many kinds, not just material things, have been credited with parts. Armstrong, for example, has taken propositions and properties to be parts of their conjunctions, sets to be parts of sets that include them, and geographical regions and events to be parts of regions and events that contain them. The justification for bringing all these diverse relations under a single ‘part–whole’ concept is that they share all or most of the formal features articulated in mereology. But the concept has also prompted an ontological thesis that has been expressed in various ways: that wholes are ‘no ontological addition’ to their parts; that to list both a whole and its parts is ‘double counting’; and that there is ‘no more’ to a whole than its parts: for example, that there is no more to a conjunction than the conjuncts that are its parts, and whose truth or falsity determines whether it is true or false. For brevity, I shall express the thesis in the last of these ways, as the claim that entities with parts are ‘nothing but’ those parts.
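The ‘formal features articulated in mereology’ referred to above are, on the standard core axiomatization (stated here only as background, in generic notation), at least those of a partial order:
\[
\begin{aligned}
  &\forall x\; (x \leq x) && \text{(reflexivity)}\\
  &\forall x\,\forall y\; \bigl[(x \leq y \wedge y \leq x) \rightarrow x = y\bigr] && \text{(antisymmetry)}\\
  &\forall x\,\forall y\,\forall z\; \bigl[(x \leq y \wedge y \leq z) \rightarrow x \leq z\bigr] && \text{(transitivity)}
\end{aligned}
\]
where $x \leq y$ reads ‘$x$ is a part of $y$’; stronger systems add supplementation and fusion principles, but even this minimal core is what licenses treating conjunct-conjunction, set-inclusion, and region-containment relations as species of the one part-whole relation.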
Objectives: To assess whether UK and US health care professionals share the views of medical ethicists about medical futility, withdrawing/withholding treatment, ordinary/extraordinary interventions, and the doctrine of double effect. Design, subjects and setting: A 138-item attitudinal questionnaire completed by 469 UK nurses studying the Open University course on “Death and Dying” was compared with a similar questionnaire administered to 759 US nurses and 687 US doctors taking the Hastings Center course on “Decisions near the End of Life”. Results: Practitioners accept the relevance of concepts widely disparaged by bioethicists: double effect, medical futility, and the distinctions between heroic/ordinary interventions and withholding/withdrawing treatment. Within the UK nurses' group, a “rationalist” axis of respondents who describe themselves as having “no religion” is closer to the bioethics consensus on withholding and withdrawing treatment. Conclusions: Professionals' beliefs differ substantially from the recommendations of their professional bodies and from majority opinion in bioethics. Bioethicists should be cautious about assuming that their opinions will be readily accepted by practitioners.
Given the growing public concern and attention placed on cases of research misconduct, government agencies and research institutions have increased their efforts to develop and improve ethics education programs for scientists. The present study sought to assess the impact of these increased efforts by sampling empirical studies published since the year 2000. Studies published prior to 2000 examined in other meta-analytic work were also included to provide a baseline for assessing gains in ethics training effectiveness over time. In total, this quantitative review consisted of 66 empirical studies, 106 ethics courses, 150 effect sizes, and 10,069 training participants. Overall, the findings indicated that ethics instruction resulted in sizable benefits to participants and has improved considerably within the last decade. A number of specific findings also emerged regarding moderators of instructional effectiveness. Recommendations are discussed for improving the development, delivery, and evaluation of ethics instruction in the sciences.
It is argued that it is possible that all properties are categorical, contrary to the arguments of Franklin that there must be dispositionality "all the way down". The tasks for which dispositionality is alleged to be needed can be fulfilled by laws of nature, which are categorical relations between universals.
[D. H. Mellor] Kant's claim that our knowledge of time is transcendental in his sense, while false of time itself, is true of tenses, i.e. of the locations of events and other temporal entities in McTaggart's A series. This fact can easily, and I think only, be explained by taking time itself to be real but tenseless. /// [J. R. Lucas] Mellor's argument from Kant fails. The difficulties in his first Antinomy are due to topological confusions, not the tensed nature of time. Nor are McTaggart's difficulties due to the tensed nature of time. The ego-centricity of tensed discourse is an essential feature of communication between selves, each of whom refers to himself as 'I', and is required for talking about time as well as experience and agency. Arguments based on the Special Theory are misconceived. Some rest on a confused notion of 'topological simultaneity'. In the General Theory a cosmic time is defined, as also in quantum mechanics, where a natural present is defined by a unique hyperplane of collapse into eigen-ness.
Social scientists could learn some useful things from philosophy. Here I shall discuss what I take to be one such thing: a better understanding of the concept of utility. There are several reasons why a better understanding may be useful. First, this concept is commonly found in the writings of social scientists, especially economists. Second, utility is the main ingredient in utilitarianism, a perspective on morality that, traditionally, has been very influential among social scientists. Third, and most important, with a better understanding of utility comes, as I shall try to show here, a better understanding of “personal welfare”, or, in other words, of what may be said to be in people's best interests. Such an understanding is useful to social scientists and philosophers alike, whether for utilitarian purposes or not.
In Big Gods, Norenzayan (2013) presents the most comprehensive treatment yet of the Big Gods question. The book is a commendable attempt to synthesize the rapidly growing body of survey and experimental research on prosocial effects of religious primes together with cross-cultural data on the distribution of Big Gods. There are, however, a number of problems with the current cross-cultural evidence that weaken support for a causal link between big societies and certain types of Big Gods. Here we attempt to clarify these problems and, in so doing, correct any potential misinterpretation of the cross-cultural findings, provide new insight into the processes generating the patterns observed, and flag directions for future research.
We investigate a possible form of Schrödinger’s equation as it appears to moving observers. It is shown that, in this framework, accelerated motion requires fictitious potentials to be added to the original equation. The gauge invariance of the formulation is established. The example of accelerated Euclidean transformations, which contain Galilean transformations as special cases, is treated explicitly. The relationship between an acceleration and a gravitational field is found to be compatible with the picture of the ‘Einstein elevator’. The physical effects of an acceleration are illustrated by the problem of the uniformly-accelerated harmonic oscillator.
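A minimal, standard illustration of the point, in generic notation and not necessarily the authors' own formulation: passing from the inertial coordinate $x$ to the uniformly accelerated coordinate $x' = x - \tfrac{1}{2}at^{2}$, the free Schrödinger equation
\[
  i\hbar\,\frac{\partial\psi}{\partial t} = -\frac{\hbar^{2}}{2m}\,\frac{\partial^{2}\psi}{\partial x^{2}}
\]
is mapped, after absorbing a suitable space- and time-dependent phase into the wavefunction, into
\[
  i\hbar\,\frac{\partial\varphi}{\partial t} = -\frac{\hbar^{2}}{2m}\,\frac{\partial^{2}\varphi}{\partial x'^{2}} + m\,a\,x'\,\varphi ,
\]
so the accelerated observer sees a fictitious linear potential $m a x'$, the potential of a uniform gravitational field of strength $a$, which is the quantum-mechanical counterpart of the ‘Einstein elevator’.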
The introduction of transgenic organisms into agriculture has raised a firestorm of controversy. Many view the technology as a pathway to a much better future society, whereas others condemn it for endangering people and the environment. One defective argument against transgenics is the Unnatural Is Unethical argument (UIU). UIU attempts to prove that if transgenic organisms are unnatural and all unnatural things are morally bad, then transgenics are morally bad. However, the argument fails once it is shown that there is no plausible definition for "unnatural." Therefore, UIU should be abandoned in favor of arguments more likely to succeed.
The December 2008 White Paper (WP) on “Brain Death” published by the President’s Council on Bioethics (PCBE) reaffirmed its support for the traditional neurological criteria for human death. It spends considerable time explaining and critiquing what it takes to be the most challenging recent argument opposing the neurological criteria, formulated by D. Alan Shewmon, a leading critic of the “whole brain death” standard. The purpose of this essay is to evaluate and critique the PCBE’s argument. The essay begins with a brief background on the history of the neurological criteria in the United States and on the preparation of the 2008 WP. After introducing the WP’s contents, the essay sets forth Shewmon’s challenge to the traditional neurological criteria and the PCBE’s reply to Shewmon. The essay concludes by critiquing the WP’s novel justification for reaffirming the traditional conclusion, a justification the essay finds wanting.
The extremely successful Standard Model of Particle Physics allows one to define the so-called Elementary Particles. From another point of view, how can we think of them? What kind of status can be attributed to Elementary Particles and their associated quantised fields? Beyond the unprecedented efficiency and reach of quantum field theories, the present paper attempts to understand the nature of what these theories describe, the enigmatic reality of the quantum world.
We show that Smullyan's analytic tableaux cannot p-simulate the truth-tables. We identify the cause of this computational breakdown and relate it to an underlying semantic difficulty which is common to the whole tradition originating in Gentzen's sequent calculus, namely the dissonance between cut-free proofs and the Principle of Bivalence. Finally we discuss some ways in which this principle can be built into a tableau-like method without affecting its analytic nature.
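For context, the notion of p-simulation at issue, given here in its usual proof-complexity formulation as background rather than as the authors' own definition: a proof system $S$ p-simulates a proof system $S'$ when proofs in $S'$ can be translated into $S$-proofs of the same formula with at most polynomial overhead,
\[
  \exists\, f \in \mathrm{FP}\;\; \forall \pi'\;\bigl[\, \pi' \text{ is an } S'\text{-proof of } \alpha \;\Rightarrow\; f(\pi') \text{ is an } S\text{-proof of } \alpha \,\bigr].
\]
Read this way, the claim that analytic tableaux cannot p-simulate the truth-table method says that there is a family of tautologies whose smallest closed tableaux grow super-polynomially in the size of the corresponding truth tables.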
Most commentators draw a sharp distinction between therapy and enhancement, applauding therapy and rejecting enhancement. Not only is this distinction unclear, but enhancement is often seen in grandiose terms in which human beings are radically transformed. Such far-reaching visions are then used to reject current procedures such as pre-implantation genetic diagnosis. To overcome this highly problematic impasse, enhancement has been divided into three categories, ranging from the health-related enhancement of category 1, through the non-health-related enhancement of category 2, to the transhumanism or posthumanism of category 3. Arguably, most enhancements are of the category 1 variety, and hence closely related to treatment. Also, we are already enhanced, when compared with our forebears. Only when we accept this and dispense with baseless speculation will we be in a position to conduct ethical discussions within a realistic framework.
Established wisdom in cognitive science holds that the everyday folk psychological abilities of humans -- our capacity to understand intentional actions performed for reasons -- are inherited from our evolutionary forebears. In _Folk Psychological Narratives_, Daniel Hutto challenges this view and argues for the sociocultural basis of this familiar ability. He makes a detailed case for the idea that the way we make sense of intentional actions essentially involves the construction of narratives about particular persons. Moreover he argues that children acquire this practical skill only by being exposed to and engaging in a distinctive kind of narrative practice. Hutto calls this developmental proposal the narrative practice hypothesis (NPH). Its core claim is that direct encounters with stories about persons who act for reasons supply children with both the basic structure of folk psychology and the norm-governed possibilities for wielding it in practice. In making a strong case for the as yet underexamined idea that our understanding of reasons may be socioculturally grounded, Hutto not only advances and explicates the claims of the NPH, but he also challenges certain widely held assumptions. In this way, _Folk Psychological Narratives_ both clears conceptual space around the dominant approaches for an alternative and offers a groundbreaking proposal.
Gottfried Wilhelm Leibniz is the self-proclaimed inventor of the binary system and is regarded as such by most historians of mathematics and mathematicians. In reality, however, we owe the groundwork of today's computing not to Leibniz but to the Englishman Thomas Harriot and the Spaniard Juan Caramuel de Lobkowitz, whom Leibniz plagiarized. This plagiarism has been identified on the basis of several facts: Caramuel's work on the binary system predates Leibniz's; Leibniz was acquainted, both directly and indirectly, with Caramuel's work; and Leibniz had a natural tendency to plagiarize scientific works.
According to grounded cognition, words whose semantics contain sensory-motor features activate sensory-motor simulations, which, in turn, interact with spatial responses to produce grounded congruency effects. Growing evidence shows these congruency effects do not always occur, suggesting instead that the grounded features in a word's meaning do not become active automatically across contexts. Researchers sometimes use this as evidence that concepts are not grounded, further concluding that grounded information is peripheral to the amodal cores of concepts. We first review broad evidence that words do not have conceptual cores, and that even the most salient features in a word's meaning are not activated automatically. Then, in three experiments, we provide further evidence that grounded congruency effects rely dynamically on context, with the central grounded features in a concept becoming active only when the current context makes them salient. Even when grounded features are central to a word's meaning, their activation depends on task conditions.