Earlier this year, coinciding with a commemorative edition marking the fortieth anniversary of the Raimundus-Lullus-Institut in Freiburg i. Br., the German translation of the Liber de gentili et tribus sapientibus appeared — somewhat belated, but no less important for that. Two circumstances make the appearance of this book significant for everyone interested in Ramon Llull and in Catalan culture. First, it is not only the first complete edition of this work in German, superseding the selective and very fragmentary edition by Xosé and Elisabeth Scheible,1 but at the same time the first complete translation of the work into the German language.2 Moreover, the fact that the book appears with so well-known a publisher as Philipp Reclam suggests that the Buch vom Heiden und den drei Weisen will, at least in the coming years, be the work of Llull through which German readers first encounter the fascinating thought of the Majorcan. This work, an important contribution to interreligious discourse in the Middle Ages, serves admirably as an introduction to the thought of the Doctor Illuminatus and presents the essential features of the combinatorial analysis of his Ars in a highly explicit and at once aesthetic and literary form. Its narrative framework is provided by the dialogues of three sages, representing the three great religions, as they walk through a forest until they discover a masterfully described locus amoenus, in the midst of which they find five great trees, on whose leaves are engraved the binary combinations of the basic Lullian principles. Here the three sages, and with them the reader, are instructed by the lady Intelligence, with the help of the trees, in the workings of Lullian combinatorial analysis, before each rationally expounds to the distrustful pagan the comforting saving truth of his own faith.
In his version the translator and editor Theodor Pindl, who has already distinguished himself with various studies, especially on the Breviculum, does justice, with few exceptions,3 to both the philosophical and the literary qualities of the work and produces a very readable text. However, he did not base his translation on the critical edition of the Catalan text by Antoni Bonner,4 published in 1993 in the series Nova edició de les obres de Ramon Llull, but turned instead to Ivo Salzinger's Latin edition of 1721, reprinted in 1965. This is surprising, since the edition produced in Mainz by Ivo Salzinger, as is well known, underwent numerous editorial modifications in the ordering of sentences and the choice of words.
With a title like Luther et la philosophie, one might have expected a full account of the Reformer's philosophy, as it was understood from the eighteenth century onward and in the "liberal" circles of the nineteenth. The expression appears, for example, in the analytical tables of the Encyclopédie, under the entry "luthéranisme". Although Philippe Büttgen has taken as his object, in other works, "the confessionalization of philosophy"…
Philippe Descola has become one of the most important anthropologists working today, and Beyond Nature and Culture has been a major influence in European intellectual life since its French publication in 2005. Here, finally, it is brought to English-language readers. At its heart is a question central to both anthropology and philosophy: what is the relationship between nature and culture? Culture—as a collective human making, of art, language, and so forth—is often seen as essentially different from nature, which is portrayed as a collective of the nonhuman world, of plants, animals, geology, and natural forces. Descola shows this essential difference to be, however, not only a specifically Western notion, but also a very recent one. Drawing on ethnographic examples from around the world and theoretical understandings from cognitive science, structural analysis, and phenomenology, he formulates a sophisticated new framework, the “four ontologies”—animism, totemism, naturalism, and analogism—to account for all the ways we relate ourselves to nature. By thinking beyond nature and culture as a simple dichotomy, Descola offers nothing short of a fundamental reformulation by which anthropologists and philosophers can see the world afresh.
Jean-Luc Nancy discusses his life's work with Pierre-Philippe Jandin. As Nancy looks back on his philosophical texts, he thinks anew about democracy, community, jouissance, love, Christianity, and the arts.
This article presents the results of exploratory research conducted with managers from over 500 Norwegian companies to examine corporate motives for engaging in social initiatives. Three key questions were addressed. First, what do managers in this sample see as the primary reasons their companies engage in activities that benefit society? Second, do motives for such social initiatives vary across the industries represented? Third, can further empirical support be provided for the theoretical classifications of social initiative motives outlined in the literature? Previous research on the topic is reviewed, study methods are described, results are presented, and implications of the findings are discussed. The article concludes with an analysis of study limitations and directions for future research.
The concept of the cortical column refers to vertical cell bands with similar response properties, which were initially observed by Vernon Mountcastle’s mapping of single cell recordings in the cat somatic cortex. It has subsequently guided over 50 years of neuroscientific research, in which fundamental questions about the modularity of the cortex and basic principles of sensory information processing were empirically investigated. Nevertheless, the status of the column remains controversial today, as skeptical commentators proclaim that the vertical cell bands are a functionally insignificant by-product of ontogenetic development. This paper inquires how the column came to be viewed as an elementary unit of the cortex from Mountcastle’s discovery in 1955 until David Hubel and Torsten Wiesel’s reception of the Nobel Prize in 1981. I first argue that Mountcastle’s vertical electrode recordings served as criteria for applying the column concept to electrophysiological data. In contrast to previous authors, I claim that this move from electrophysiological data to the phenomenon of columnar responses was concept-laden, but not theory-laden. In the second part of the paper, I argue that Mountcastle’s criteria provided Hubel and Wiesel with a conceptual outlook, i.e. it allowed them to anticipate columnar patterns in the cat and macaque visual cortex. I argue that in the late 1970s, this outlook only briefly took a form that one could call a ‘theory’ of the cerebral cortex, before new experimental techniques started to diversify column research. I end by showing how this account of early column research fits into a larger project that follows the conceptual development of the column into the present.
Today, one out of every six children suffers from some form of neurodevelopmental abnormality. The causes are mostly unknown. Some environmental chemicals are known to cause brain damage and many more are suspected of it, but few have been tested for such effects. Philippe Grandjean provides an authoritative and engaging analysis of how environmental hazards can damage brain development and what we can do about it. The brain's development is uniquely sensitive to toxic chemicals, and even small deficits may negatively impact our academic achievements, economic success, risk of delinquency, and quality of life. Chemicals such as mercury, polychlorinated biphenyls, arsenic, and certain pesticides pose an insidious threat to the development of the next generation's brains. When chemicals in the environment affect the development of a child's brain, he or she is at risk for mental retardation, cerebral palsy, autism, ADHD, and a range of learning disabilities and other deficits that will remain for a lifetime. We can halt chemical brain drain and protect the next generation, however, and Grandjean tells us how. First, we need to control all of the 200 industrial chemicals that have already been proven to affect brain functions in adults, as their effects on the developing brain are likely even worse. We must also push for routine testing for brain toxicity, stricter regulation of chemical emissions, and more required disclosure on the part of industries that unleash hazardous chemicals into products and the environment. Decisions can still be made to protect the brains of future generations. "In his crisply written, deeply documented book, Dr. Philippe Grandjean, renowned physician and public health specialist, describes the exquisite vulnerability of the developing human brain to toxic chemicals in the environment, a vulnerability that he ascribes to the brain's almost unimaginable complexity.
Today, nearly one in six children is born with a neurodevelopmental disorder - a birth defect of the brain. One in eight has attention deficit disorder. One in 68 is diagnosed with autism spectrum disorder. These rates are far higher than those of a generation ago, and, although they are less publicized, the problems are more prevalent than those caused by thalidomide in the 1960s. The increases are far too rapid to be genetic. They cannot be explained by better diagnosis. How then could they have come to be? Dr. Grandjean has a diagnosis -- the thousands of toxic chemicals that have been released to the environment in the past 40 years with no testing for toxicity. David P. Rall, former Director of the US National Institute of Environmental Health Sciences, once stated that 'If thalidomide had caused a ten-point loss of IQ rather than obvious birth defects of the limbs, it would probably still be on the market'. This is the core message of Dr. Grandjean's 'must read' book." - Philip J. Landrigan, Dean for Global Health, Ethel H. Wise Professor and Chairman and Director, Children's Environmental Health Center, Mount Sinai School of Medicine.
In 1981, David Hubel and Torsten Wiesel received the Nobel Prize for their research on cortical columns—vertical bands of neurons with similar functional properties. This success led to the view that “cortical column” refers to the basic building block of the mammalian neocortex. Since the 1990s, however, critics have questioned this building block picture of “cortical column” and debated whether the concept is useless and should be replaced with successor concepts. This paper inquires which experimental results after 1981 challenged the building block picture and whether these challenges warrant the elimination of “cortical column” from neuroscientific discourse. I argue that the proliferation of experimental techniques led to a patchwork of locally adapted uses of the column concept. Each use refers to a different kind of cortical structure, rather than a neocortical building block. Once we acknowledge this diverse-kinds picture of “cortical column”, the elimination of the column concept becomes unnecessary. Rather, I suggest that “cortical column” has reached conceptual retirement: although it cannot be used to identify a neocortical building block, column research is still useful as a guide and cautionary tale for ongoing research. At the same time, neuroscientists should search for alternative concepts when studying the functional architecture of the neocortex. Keywords: cortical column, conceptual development, history of neuroscience, patchwork, eliminativism, conceptual retirement.
The logic under discussion is an extension of public announcement logic. It is based on a modal operator that expresses what is true after any arbitrary announcement. An incorrect Truth Lemma has been stated and ‘demonstrated’ in Balbiani et al. In this paper, we put right the wording and the proof of the Truth Lemma for this logic.
Husserl’s transcendental phenomenology is first and foremost a science of the structures of consciousness. Since it is intended to yield eidetic, i.e., a priori insights, it is often assumed that transcendental phenomenology and the natural sciences are totally detached from each other such that phenomenological investigations cannot possibly benefit from empirical evidence. The aim of this paper is to show that a beneficial relationship is possible. To be more precise, I will show how Husserl’s a priori investigations on consciousness can be supplemented by research in experimental psychology in order to tackle fundamental questions in epistemology. Our result will be a phenomenological conception of experiential justification that is in accordance with and supported by empirical phenomena such as perceptual learning and the phenomenon of blindsight. Finally, I shall shed light on the systematic limits of empirical research.
Ariès traces Western man's attitudes toward mortality from the early medieval conception of death as the familiar collective destiny of the human race to the modern tendency, so pronounced in industrial societies, to hide death as if it were an embarrassing family secret.
I will present and criticise the two theories of truthmaking David Armstrong offers us in Truth and Truthmakers (Armstrong 2004), show to what extent they are incompatible, and identify troublemakers for both of them: a notorious one – Factualism, the view that the world is a world of states of affairs – and a more recent one – the view that every predication is necessary. Factualism, combined with truthmaker necessitarianism – ‘truthmaking is necessitation’ – leads Armstrong to an all-embracing totality state of affairs that necessitates not only everything that is the case but also everything else – that which is not the case, that which is merely possible or even impossible. All the things so dear to realists – rocks, natural properties, real persons – become mere abstractions from this ontological monster. The view that every predication is necessary does in some sense the opposite: it does away with totality states of affairs and, arguably, also with states of affairs. We have particulars and universals, partially identical and necessarily connected to everything else. Just by the existence of anything, everything is necessitated – the whole world mirrored in every monad. Faced with the choice between these two equally unappealing alternatives, I suggest returning to Armstrong’s more empiricist past: the world is not an all-inclusive One, nor necessitated by every single particular and every single universal, but a plurality of particulars and universals, interconnected by a contingent and internal relation of exemplification. While a close variant, truthmaker essentialism, can perhaps be saved, this means giving up on truthmaker necessitarianism. This, I think, is what it takes to steer a clear empiricist course between the Scylla of Spinozist general factness and the Charybdis of a Leibnizian overdose of brute necessities.
Linguistic intuitive judgements are the de facto data source of choice within generative linguistics. But why are we justified in relying on intuitive judgements as evidence for grammars? In the philosophy of linguistics, this question has been hotly debated. I argue that the three most prominent views of that debate all have their problems. Devitt’s Modest Explanation accounts for the wrong kind of intuitive judgements. The Voice of Competence view and Rey’s account both lack independent evidence. I introduce and defend a novel proposal that accounts for the evidential role of linguistic intuitive judgements and avoids these shortcomings. On this account, linguistic intuitive judgements are reports of the speaker’s immediate experience of trying to comprehend the sentence. This experience is due to the speaker’s linguistic competence, at least in part, and so the justification for the evidential use of linguistic intuitions ultimately comes from the speaker’s competence. However, the account does not rely on any special input from the speaker’s competence being available as the basis for linguistic intuitive judgements.
Environmental pollutants such as lead, mercury, and pesticides interfere with brain development, yet we do not test industrial chemicals for brain toxicity. In this book, Philippe Grandjean argues for the necessity of protecting the brains of future generations and proposes a plan of action to halt what he refers to as chemical brain drain.
What kind of politics can one practice when one is an idiot? Far from being absurd, this is the question one is inevitably led to ask when reading the work of Gilles Deleuze. The "idiot" plays an essential and unavoidable role in Deleuze's philosophy. He is the conceptual persona that holds this philosophy together in its own consistency. He stands at the hinge between the image of thought that the philosopher invokes and more or less implicitly presupposes, and the creation of concepts that he explicitly produces. Thus, doing philosophy, like acting and thinking politically, is always a way of playing the idiot. The consequences of this approach are considerable, both because of the questions it poses to classical political reflection centred on Law and the State, and because of the problematics that are thereby invalidated in this field. The place given to control and bio-power, to zones of indetermination and smooth spaces, allows us to take a healthy distance from "majoritarian" politics and to expose, as already old clichés, a good number of themes and concepts claimed by recent alternative, globalist and subversive organizations. It is an altogether different idea of politics, centred on becomings, that the Deleuzian politics of the idiot invites us to contemplate. It opens us to other, new hopes, bound up with a "missing people" that would be born with each becoming.
Polysemous concepts with multiple related meanings pervade natural languages, yet some philosophers argue that we should eliminate them to avoid miscommunication and pointless debates in scientific discourse. This paper defends the legitimacy of polysemous concepts in science against this eliminativist challenge. My approach analyses such concepts as patchworks with multiple scale-dependent, technique-involving, domain-specific and property-targeting uses (patches). I demonstrate the generality of my approach by applying it to "hardness" in materials science, "homology" in evolutionary biology, "gold" in chemistry and "cortical column" in neuroscience. Such patchwork concepts are legitimate if the techniques used to apply them produce reliable results, the domains to which they are applied are homogeneous, and the properties they refer to are significant to describe, classify or explain the behavior of entities in the extension of the concept. By following these normative constraints, researchers can avoid miscommunication and pointless debates without having to eliminate polysemous patchwork concepts in scientific discourse.
This paper concerns Aristotle's kind‐crossing prohibition. My aim is twofold. I argue that the traditional accounts of the prohibition are subject to serious internal difficulties and should be questioned. According to these accounts, Aristotle's prohibition is based on the individuation of scientific disciplines and the general kind that a discipline is about, and it says that scientific demonstrations must not cross from one discipline, and corresponding kind, to another. I propose a very different account of the prohibition. The prohibition is based on Aristotle's scientific and metaphysical essentialism, according to which a scientific demonstration must take as its starting point a set of per se properties of a subject, if these make up a single, unitary definition. The subject of demonstration here is a kind, although not the general kind associated with a discipline, but rather the particular kind that the particular demonstration is about.
The focus of this paper is the ethical, legal and social challenges of ensuring the responsible use of “big brain data”—the recording, collection and analysis of individuals’ brain data on a large scale with clinical and consumer-directed neurotechnological devices. First, I highlight the benefits of big data and machine learning analytics in neuroscience for basic and translational research. Then, I describe some of the technological, social and psychological barriers to securing brain data from unwarranted access. In this context, I then examine ways in which safeguards at the hardware and software level, as well as increasing “data literacy” in society, may enhance the security of neurotechnological devices and protect the privacy of personal brain data. Regarding ethical and legal ramifications of big brain data, I first discuss effects on autonomy, the sense of agency and authenticity, as well as the self, that may result from the interaction between users and intelligent, particularly closed-loop, neurotechnological devices. I then discuss the impact of “datafication” in basic and clinical neuroscience research on the just distribution of resources and access to these transformative technologies. In the legal realm, I examine possible legal consequences that arise from the increasing ability to decode brain states and their corresponding subjective phenomenological experiences, and its impact on the hitherto inaccessible privacy of this information. Finally, I discuss the implications of big brain data for national and international regulatory policies and models of good data governance.
This book aims to make the pragmatist intellectual framework accessible to organization and management scholars. It presents some fundamental concepts of Pragmatism, their potential application to the study of organizations and the resulting theoretical, methodological, and practical issues.