Objective To assess parental permission for a neonate's research participation using the MacArthur Competence Assessment Tool for Clinical Research (MacCAT-CR), specifically testing the components of understanding, appreciation, reasoning, and choice. Study Design Quantitative interviews using study-specific MacCAT-CR tools. Hypothesis Parents of critically ill newborns would produce MacCAT-CR scores comparable to those of healthy adult controls despite the emotional stress of an infant with critical heart disease and the urgency of surgery. Parents of infants diagnosed prenatally would have higher MacCAT-CR scores than parents of infants diagnosed postnatally. There would be no difference in MacCAT-CR scores between parents with respect to gender or whether they did or did not permit research participation. Participants Parents of neonates undergoing cardiac surgery who had made decisions about research participation before their neonate's surgery. Methods The MacCAT-CR. Results 35 parents (18 mothers, 17 fathers) of 24 neonates completed 55 interviews for one or more of three studies. Total scores: magnetic resonance imaging (mean 36.6, SD 7.71), genetics (mean 38.8, SD 3.44), heart rate variability (mean 37.7, SD 3.30). Parents generally scored higher than published subject populations and were comparable to published control populations, with some exceptions. Conclusions The MacCAT-CR can be used to assess parental permission for neonatal research participation. Despite the stress of a critically ill neonate requiring surgery, parents were able to understand study-specific information and make informed decisions to permit their neonate's participation.
Nathan Hanna has recently addressed a claim central to my 2013 article 'Must Punishment Be Intended to Cause Suffering' and to the second chapter of my 2016 book An Expressive Theory of Punishment: namely, that punishment need not involve an intention to cause suffering. Hanna defends what he calls the 'Aim To Harm Requirement' (AHR), which he formulates as follows. AHR: 'an agent punishes a subject only if the agent intends to harm the subject' (Hanna 2017, p. 969). I'll try to show in this note that Hanna's latest attempts to defend AHR fail. I'll start by setting out my own view, drawing attention to one significant, but perhaps understandable, misstatement of Hanna's. I'll then discuss two alleged counter-examples that Hanna presents to my view, and show that they both fail in their own terms. I'll also argue that, given assumptions that Hanna is willing to make, a scenario closely related to one that Hanna presents counts against AHR. I'll then discuss how significant it would be if these counter-examples were successful. My view is that it wouldn't matter much, and that anyone attracted to abolitionism should agree. I'll conclude with a brief discussion of Hart, which may be of interest to enthusiasts and Hart scholars.
right. Unlike incoherent positive rights, such as the “right” to education or health care, the animal right is, at bottom, a right to be left alone. It does not call for government to tax us in order to provide animals with food, shelter, and veterinary care. It only requires us to stop killing them and making them suffer. I can think of no other issue where the libertarian is arguing for a positive right—his right to make animals submit to any use he sees—and the other side is arguing for a negative right!
Spade 1988 suggests that there are actually two theories to address this question to, an early one and a later one. Most of the present paper is a development of this idea. I suggest that early work by Sherwood and others was a study of quantifiers: their semantics and the effects of context on inferences that can be made from quantified terms. Later, in the hands of Burley and others, it changed into a study of something else, a study of what I call global quantificational effect. In section 1, I explain what these two options are.
Does “Christian philosophy” still have a future, or is it now coming to an end? This article seeks to answer this question in a nuanced way, after recalling the birth of the problem and its cultural context, reviewing the essentials of the famous quarrel of the 1930s, and examining several later attempts to articulate the encounter between autonomous philosophy and Christian experience.
Following Ebeling, René Marlé became interested in Bultmann's central question, “What meaning does talk about God have?”, to the extent that it supplied the title of his book on Ebeling, Parler de Dieu aujourd'hui (Speaking about God Today). The question concerns neither the meaning of the word God nor knowledge of God, but the sense of talking about God: it is situated, therefore, in the field of hermeneutics, that of the analysis of language. “If you want to talk about God,” Bultmann replies, “you must necessarily talk about yourself.” He means that one can speak of God only from the personal experience of having encountered Him, of having been encountered by His Word. He does not, then, forbid introducing the name of God into language; not, however, in the objectivizing language that belongs to the spatio-temporal condition, but in existential speech, which Heidegger theorized, the speech of human and interpersonal experience, insofar as God gives meaning to my life. It is then possible to speak of Him meaningfully, to the extent that God makes Himself apprehended as the Wholly Other through the obedience of faith.
Theology's attitude to the question of evil has progressed in the West thanks to the study of Dionysius' Divine Names and the scientific analysis of Thomas Aquinas. It is, nevertheless, interesting to underline the new order and the rectifications that Thomas Aquinas brings to the thought of Dionysius, upon which he depends. Dionysius, impregnated with neo-Platonic philosophy, appears in his work to elaborate a theology of love and the Good. He sees the Good not only as the object of love, but as calling for an order and a life. Evil can only be situated in terms of a vital operation that is faulted. In comparison with Dionysius, Thomas Aquinas operates an inversion of thought. His own contribution is to go beyond the two positions of Dionysius, according to whom evil does not have its own cause, the Good being the end of all evils. Evil, which remains accidental for Thomas, is also the fruit of an unbalanced relation. If evil as pain is a consequence of sin, does that resolve everything concerning the problem of evil? Beyond St. Thomas, is there not a need today for a clear-sighted look at the reciprocal relation, that is, the inhibiting role that an excess of suffering often plays in the relation to God?
Are all truths knowable? A negative answer to this question follows from a logical argument related to Moore's paradox and due to F. Fitch, known as Fitch's knowability paradox. Fitch's paradox is widely considered to be an obstacle to the antirealist conception of truth, but even for a realist it appears to threaten the positivist faith in the accessibility of all truths to our minds. In this paper, I first review different strategies to circumvent the paradox, each of them inspired by a different form of antirealism. I then compare these approaches to a realist version of positivism, discussed recently by Burgess, which would only postulate that all necessary truths are knowable. On that view, some contingent truths are indeed unknowable, but the idea that all necessary truths are within our reach remains sufficient to maintain the positivist faith in scientific knowledge.
Is there an energeia of the Good according to Plotinus? The aim of this paper is to shed light on the tension between two conflicting perspectives concerning the Good in the philosophy of Plotinus. According to the first perspective, Plotinus claims that the first principle completely transcends energeia, which is strictly limited to the Intellect. According to the second, he ascribes a kind of immanent energeia to the One. I will examine the two series of texts in which these two perspectives are present and advance two hypotheses to explain the divergence between these two viewpoints. Firstly, the meaning of the term energeia is not unequivocal, depending on whether it is strictly limited to the intelligible realm or ascribed to the One. Secondly, the competition between two models of causality in the Enneads can explain why Plotinus has two divergent views on the relation between the Good and energeia. According to the first model, the Good “does not have in itself what it gives”. In line with this principle, Plotinus claims that the Good stands epekeina energeias. The second model of causality is inherited from the Peripatetic school. According to it, the cause already contains eminently in itself that which it gives. This model of causality helps explain the ascription of energeia to the Good and the so-called “double-energeia theory”.
As children in elementary school we were taught to recite the alphabet in order: “Aay, Bee, See, Dee, Eii, Eff, Ghee, Aaych, …, Why and Zee”. There is nothing natural about this particular ordering: it is strictly a matter of convention. (When and where it was settled upon I haven't the remotest notion.) Then, having mastered the ordering, we were taught to apply that knowledge to alphabetize lists of words. The procedure is surprisingly complex, and its mastery by mere eight-year-olds attests to the elevated intellectual capacities of human beings. One merely has to try to write down the procedure in a flow chart to see how complex it truly is. In any event, most of us probably emerged from the exercise of learning to use a dictionary believing that we knew all there is to know about alphabetizing. If only that were all there is to it. The trouble is that our third-grade teachers had not reckoned on our having to program computers to alphabetize.
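The gap between schoolroom alphabetizing and what a computer does by default can be sketched in a few lines of Python. This is an illustration of the abstract's point, not code from the article: a plain sort compares character codes, so every uppercase letter precedes every lowercase one, and dictionary-style ordering needs an explicit collation rule.

```python
# A naive computer sort differs from dictionary-style alphabetizing:
# in plain code-point comparison, all uppercase letters sort before
# all lowercase letters.
words = ["apple", "Zebra", "banana"]

naive = sorted(words)
# naive == ['Zebra', 'apple', 'banana']  ('Z' < 'a' by character code)

dictionary_style = sorted(words, key=str.casefold)
# dictionary_style == ['apple', 'banana', 'Zebra']
```

Even this only scratches the surface: accents, ligatures, spaces, and hyphens each demand further rules before a program matches what a dictionary actually does.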
This article gives a brief introduction to the MacArthur Competence Assessment Tool-Treatment (MacCAT-T) and critically examines its theoretical presuppositions. On the basis of empirical, methodological and ethical critique, it is emphasised that the cognitive bias that underlies the MacCAT-T assessment needs to be modified. On the one hand, it has to be admitted that the operationalisation of competence in terms of value-free categories, e.g. rational decision abilities, guarantees objectivity to a great extent; on the other hand, it raises serious problems. Firstly, the cognitive focus is in itself a normative convention in the process of anthropological value-attribution. Secondly, it misses the complexity of the decision process in real life. It is therefore suggested that values, emotions and other biographical and context-specific aspects should be considered when interpreting the cognitive standards according to the MacArthur model. To fill the gap between cognitive and non-cognitive approaches, the phenomenological theory of personal constructs is briefly introduced. In conclusion, some main demands for further research to develop a multi-step model of competence assessment are outlined.
Three-dimensional material models of molecules were used throughout the 19th century, either functioning as mere representations or opening new epistemic horizons. In this paper, two case studies are examined: the 1875 models of van 't Hoff and the 1890 models of Sachse. What is unique in these two case studies is that both models were not only folded, but were also conceptualized mathematically. When viewed in light of the chemical research of that period, each of these aspects was exceptional in itself; taken together, they may be thought of as a subversion of the way molecules were chemically investigated in the 19th century. Concentrating on this unique shared characteristic of the models of van 't Hoff and the models of Sachse, this paper deals with the shifts and displacements between their operational methods and existence: between their technical and epistemological aspects and the fact that they were folded, which was forgotten or simply ignored in the subsequent development of chemistry.
The problem of free will is deeply linked with the causal relevance of mental events. The causal exclusion argument claims that, in order to be causally relevant, mental events must be identical to physical events. However, Gibb has recently criticized it, suggesting that mental events are causally relevant as double preventers. For Gibb, mental events enable physical effects to take place by preventing other mental events from preventing a behaviour from taking place. The role of mental double preventers is hence similar to what Libet calls free won't, namely the ability to veto an action initiated unconsciously by the brain. In this paper I propose an argument against Gibb's account, the causal irrelevance argument, showing that Gibb's proposal does not overcome the objection of systematic overdetermination of causal relevance, because mental double preventers systematically overdetermine physical double preventers, and therefore mental events are causally irrelevant.
The implementation of Responsible Research and Innovation (RRI) is not without its challenges, and one of these arises when societal desirability is included amongst the RRI principles. We will argue that societal desirability is problematic even though it appears to fit well with the overall ideal. This discord occurs partly because the idea of societal desirability is inherently ambiguous, but more importantly because its scope is unclear. This paper asks: is societal desirability in the spirit of RRI? On von Schomberg's account, it seems clear that it is, but societal desirability can easily clash with what is ethically permissible; for example, when what is desirable in a particular society is bad for the global community. If that society chose not to do what was desirable for it, the world would be better off than if it did. Yet our concern here is with a more complex situation, where there is a clash with ethical acceptability, but where the world would not be better off if the society chose not to do what was societally desirable for itself. This is the situation where it is argued that someone else will do it if we do not. The first section of the paper gives an outline of what we take technology to be, and the second is a discussion of which criteria should be the basis for choosing research and innovation projects; this will draw on the account of technology outlined in the first section. This will be followed by an examination of a common argument, “If we don't do it, others will”. This argument is important because it appears to justify acting in morally dubious ways. Finally, it will be argued that societal desirability gives support to the “If we don't…” argument and that this raises some difficulties for RRI.
W. T. Stace's argument about realism is presented, pointing out not that realism is false but only that there is absolutely no reason to consider it true, and therefore we need not believe it. This is applied to the discussion of the question: How do we know that atoms exist? Reference is made to some of the most important known scientific answers, which are, in chronological order: i) the law of definite proportions, or Proust's law; ii) the kinetic theory of gases; iii) Brownian motion; and iv) scanning tunneling microscope images.
T. H. Morgan (1866–1945), the founder of the Drosophila research group in genetics that established the chromosome theory of Mendelian inheritance, has been described as a radical empiricist in the historical literature. His empiricism, furthermore, is supposed to have prejudiced him against certain scientific conclusions. This paper aims to show two things: first, that the sense in which the term empiricism has been used by scholars is too weak to be illuminating. It is necessary to distinguish between empiricism as an epistemological position and so-called methodological empiricism. I will argue that the way the latter has been presented cannot distinguish an empiricist methodology from a non-empiricist one. Second, I will show that T. H. Morgan was not an epistemological empiricist as this term is usually defined in philosophy. The reason is that he believed in the existence of genes as material entities when they were unobservable entities introduced to account for the phenotypic ratios found in breeding experiments. These two points, of course, are interrelated. If we were to water down the meaning of empiricism, perhaps we could call Morgan an empiricist. But then we would also fail to distinguish empiricism from realism.
This article addresses the question of religious tolerance in the German Enlightenment through the analysis and interpretation of selected passages from Lessing's play Nathan der Weise (1779). It aims to show that this work's origin is intimately linked to the theological debate (“Fragmentenstreit”) between Lessing and Pastor Johann Melchior Goeze of Hamburg, and that it can be read as a reaction and a response to the latter's criticisms and objections.
This thesis explores, thematically and chronologically, the substantial concordance between the work of Martin Heidegger and T.S. Eliot. The introduction traces Eliot's ideas of the 'objective correlative' and 'situatedness' to a familiarity with German Idealism. Heidegger shared this familiarity, suggesting a reason for the similarity of their thought. Chapter one explores the 'authenticity' developed in Being and Time, as well as associated themes like temporality, the 'they' (Das Man), inauthenticity, idle talk and angst, and applies them to interpreting Eliot's poem, 'The Love Song of J. Alfred Prufrock'. Both texts depict a bleak Modernist view of the early twentieth-century Western human condition, characterized as a dispiriting nihilism and homelessness. Chapter two traces the chronological development of Ereignis in Heidegger's thinking, showing the term's two discernible but related meanings: first our nature as the 'site of the open' where Being can manifest, and second individual 'Events' of 'appropriation and revelation'. The world is always happening as 'event', but only through our appropriation by the Ereignis event can we become aware of this. Heidegger finds poetry, the essential example of language as the 'house of Being', to be the purest manifestation of Ereignis, taking as his examples Hölderlin and Rilke. A detailed analysis of Eliot's late work Four Quartets reveals how Ereignis, both as an ineluctable and an epiphanic condition of human existence, is central to his poetry, confirming, in Heidegger's words, 'what poets are for in a destitute time', namely to re-found and restore the wonder of the world and existence itself. This restoration results from what Eliot calls 'raid[s] on the inarticulate', the poet's continual striving to enact that openness to Being through which human language and the human world continually come to be.
The final chapter shows how both Eliot and Heidegger value a genuine relationship with place as enabling human flourishing. Both distrust technological materialism, which destroys our sense of the world as dwelling place, and both are essentially committed to a genuinely authentic life, not the angstful authenticity of Being and Time, but a richer belonging which affirms our relationship with the earth, each other and our gods.
In “Why We Need Friendly AI”, Luke Muehlhauser and Nick Bostrom propose that for our species to survive the impending rise of superintelligent AIs, we need to ensure that they would be human-friendly. This discussion note offers a more natural but bleaker outlook: that in the end, if these AIs do arise, they won’t be that friendly.
Ted T. Aoki, the most prominent curriculum scholar of his generation in Canada, has influenced numerous scholars around the world. Curriculum in a New Key brings together his work, over a 30-year span, gathered here under the themes of reconceptualizing curriculum; language, culture, and curriculum; and narrative. Aoki's oeuvre is utterly unique--a complex interdisciplinary configuration of phenomenology, post-structuralism, and multiculturalism that is both theoretically and pedagogically sophisticated and speaks directly to teachers, practicing and prospective. Curriculum in a New Key: The Collected Works of Ted T. Aoki is an invaluable resource for graduate students, professors, and researchers in curriculum studies, and for students, faculty, and scholars of education generally.
Do we live in a computer simulation? I will present an argument that the results of a certain experiment constitute empirical evidence that we do not live in, at least, one type of simulation. The type of simulation ruled out is very specific. Perhaps that is the price one must pay to make any kind of Popperian progress.
Libertarianism needs a theory of class. This claim may meet with resistance among some libertarians. A few will say: “The analysis of society in terms of classes and class struggles is a specifically Marxist approach, resting on assumptions that libertarians reject. Why should we care about class?” A greater number will say: “We recognize that class theory is important, but libertarianism doesn't need such a theory, because it already has a perfectly good one.”
I present here a modal extension of T called KTLM which is, by several measures, the simplest modal extension of T yet presented. Its axiom uses only one sentence letter and has a modal depth of 2. Furthermore, KTLM can be realized as the logical union of two logics KM and KTL which each have the finite model property (f.m.p.), and so themselves are complete. Each of these two component logics has independent interest as well.
The CRISPR system for gene editing can break, repair, and replace targeted sections of DNA. Although CRISPR gene editing has important therapeutic potential, it raises several ethical concerns. Some bioethicists worry CRISPR is a prelude to a dystopian future, while others maintain it should not be feared because it is analogous to past biotechnologies. In the scientific literature, CRISPR is often discussed as a revolutionary technology. In this paper we unpack the framing of CRISPR as a revolutionary technology and contrast it with framing it as a value-threatening biotechnology or as business-as-usual. By drawing on a comparison between CRISPR and the Ford Model T, we argue CRISPR is revolutionary as a product, as a process, and as a force for social change. This characterization of CRISPR offers important conceptual clarity to the existing debates surrounding CRISPR. In particular, conceptualizing CRISPR as a revolutionary technology structures regulatory goals with respect to this new technology. Revolutionary technologies have characteristic patterns of implementation, entrenchment, and social impact. As such, early identification of technologies as revolutionary may help construct more nuanced and effective ethical frameworks for public policy.
The publication in 1957 of the Wolfenden Report occasioned a celebrated controversy in which profound theoretical issues concerning the relation between law and morality, and the legal enforcement of morality, were discussed. The principal disputants were Lord Justice Devlin and Professor H. L. A. Hart. It is by now well known that the main recommendation of the Wolfenden Report was the reform of the criminal law so that homosexual behaviour in private between consenting male adults should no longer be a criminal offence. As homosexual behaviour in Christendom was at the outset punishable in the ecclesiastical courts, and subsequently, with the demise of the ecclesiastical courts, in the secular courts, the Wolfenden recommendation on homosexuality marked a major departure from the prevailing state of affairs, in which the precepts of Christian morality, especially relating to sexual morals, were enforced first by the ecclesiastical courts and then by the secular courts.