Background: Due to recent legislation on euthanasia and its current practice in the Netherlands and Belgium, issues of end-of-life medicine have become pressing in many European countries. In 2002, the Ethics Working Group of the German Association for Palliative Medicine (DGP) conducted a survey among its physician members in order to evaluate their attitudes towards different end-of-life medical practices, such as euthanasia (EUT), physician-assisted suicide (PAS), and terminal sedation. Methods: An anonymous questionnaire was sent to the 411 DGP physicians, consisting of 14 multiple-choice questions on positions that might be adopted in different hypothetical scenarios of “intolerable suffering” in end-of-life care. For the sake of clarification, several definitions and legal judgements of the different terms used in the German debate on the premature termination of life were included. For statistical analysis, t-tests and Pearson correlations were used. Results: The response rate was 61%. The proportions of respondents opposed to legalizing different forms of premature termination of life were: 90% opposed to EUT, 75% to PAS, and 94% to PAS for psychiatric patients. Terminal sedation was accepted by 94% of the members. The main decisional bases drawn on for the answers were personal ethical values, professional experience with palliative care, knowledge of alternative approaches, and knowledge of ethical guidelines and of the national legal framework. Conclusions: In sharp contrast to similar surveys conducted in other countries, only a minority of 9.6% of the DGP physicians supported the legalization of EUT. The misuse of medical knowledge for inhumane killing in the Nazi period did not play a relevant role in the respondents’ negative attitude towards EUT. Palliative care needs to be more firmly established and promoted within the German health care system in order to improve the quality of end-of-life situations, which in turn is expected to reduce requests for EUT by terminally ill patients.
Argues that the key distinction between human and nonhuman social cognition consists in our complex, diverse and flexible capacities to shape each other's minds in ways that make them easier to interpret.
The aim of this note is to show (Theorem 1.6) that in each of the cases: = {, }, or {, , }, or {, , } there are uncountably many -intermediate logics which are not finitely approximable. This result, together with results known in the literature, allows us to conclude (Theorem 2.2) that for each : either all -intermediate logics are finitely approximable or there are uncountably many of them which lack the property.
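A gloss on the key property, as it is standardly defined in this literature (the note's own definitions are not reproduced in the abstract): a logic L is finitely approximable when it has the finite model property, i.e.

\[
\varphi \notin L \;\Longrightarrow\; \text{some finite algebra (or frame) validating } L \text{ refutes } \varphi .
\]

Theorem 1.6 thus exhibits uncountably many intermediate logics for which some non-theorem survives in every finite model.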
This paper focuses on what is perhaps the most characteristic doctrine of Numenius: the doctrine which, drawing on what is said in the Second Letter attributed to Plato, affirms the existence of three gods. By analyzing some of the preserved fragments, I try to offer an overview of Numenian theology and thereby to specify the particularity of Numenius’ thought in the context of the Platonism of his time, a fact that makes him one of the most relevant predecessors of Plotinus’s system.
We maximally extend the quantum-mechanical results of Muller and Saunders (2008) establishing the ‘weak discernibility’ of an arbitrary number of similar fermions in finite-dimensional Hilbert spaces. This confutes the currently dominant view that (A) the quantum-mechanical description of similar particles conflicts with Leibniz’s Principle of the Identity of Indiscernibles (PII); and that (B) the only way to save PII is by adopting some heavy metaphysical notion such as Scotusian haecceitas or Adamsian primitive thisness. We take sides with Muller and Saunders (2008) against this currently dominant view, which has been expounded and defended by many.
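A minimal illustration of weak discernibility, on the standard spin-singlet example often used alongside Muller and Saunders (2008) (the paper's own result is more general, covering arbitrary numbers of similar fermions): for two fermions in the singlet state

\[
\lvert \psi \rangle \;=\; \tfrac{1}{\sqrt{2}}\bigl(\lvert \uparrow\rangle_1 \otimes \lvert \downarrow\rangle_2 \;-\; \lvert \downarrow\rangle_1 \otimes \lvert \uparrow\rangle_2\bigr),
\]

the symmetric, irreflexive relation ‘has z-spin opposite to’ holds between the two particles but fails between each particle and itself, even though they share all monadic properties. An irreflexive relation each bears to the other but not to itself discerns them weakly, which is all that a relational reading of PII requires.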
We propose a new schema for the deduction theorem and prove that the deductive system S of a propositional logic L fulfills the proposed schema if and only if there exists a finite set A(p, q) of propositional formulae, involving only the propositional letters p and q, such that A(p, p) ⊆ L and p, A(p, q) ⊢_S q.
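A sanity check on the schema in the classical case (my illustration, not taken from the paper): for classical propositional logic one may take A(p, q) = {p → q}. Then

\[
A(p,p) = \{\, p \to p \,\} \subseteq L, \qquad p,\; p \to q \;\vdash_S\; q \quad \text{(modus ponens)},
\]

so the schema is fulfilled, and it reduces to the familiar deduction theorem: Γ, φ ⊢_S ψ if and only if Γ ⊢_S φ → ψ.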
This paper follows up a debate as to whether classical electrodynamics is inconsistent. Mathias Frisch makes this claim in Inconsistency, Asymmetry and Non-Locality ([2005]), but it was quickly countered by F. A. Muller ([2007]) and Gordon Belot ([2007]). Here I argue that both Muller and Belot fail to connect with the background assumptions that support Frisch's claim. Responding to Belot, I explicate Frisch's position in more detail, before providing my own criticisms. Correcting Frisch's position, I find that I can present the theory in a way both authors can agree upon. Differences then manifest themselves purely within the reasoning methods employed.
One of the reasons provided for the shift away from an ontology for physical reality of material objects and properties towards one of physical structures and relations (Ontological Structural Realism: OntSR) is that the quantum-mechanical description of composite physical systems of similar elementary particles entails that they are indiscernible. As material objects, they ‘wither away’, and when they wither away, structures emerge in their stead. We inquire into the question whether recent results establishing the weak discernibility of elementary particles pose a threat to this quantum-mechanical reason for OntSR, because precisely their newly discovered discernibility prevents them from ‘withering away’. We argue that there is a straightforward way to take the recent results as a reason in favour of OntSR rather than against it.
[Müller, Vincent C. (ed.) (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] -- This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. It examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as the ethical and societal problems that artificial intelligence raises or will raise. The key issues this volume investigates include the relation of AI and cognitive science, the ethics of AI and robotics, brain emulation and simulation, hybrid systems and cyborgs, intelligence and intelligence testing, interactive systems, multi-agent systems, and superintelligence. Based on the 2nd conference on “Theory and Philosophy of Artificial Intelligence”, held in Oxford, the volume includes contributions from prominent researchers in the field from around the world.
A generalized Wittgensteinian semantics for propositional languages is presented, based on a lattice of elementary situations. Of these, the maximal ones are possible worlds, constituting a logical space; the minimal ones are logical atoms, partitioned into the dimensions of that space. A verifier of a proposition is an elementary situation such that, if real, it makes the proposition true. The reference (or objective) of a proposition is a situation: the set of all its minimal verifiers. (The maximal ones constitute its locus.) Situations are shown to form a Boolean algebra, and the Boolean set algebra of loci is its representation. Wittgenstein's semantics is the special case admitting binary dimensions only.
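A toy instance, constructed here from the abstract's definitions rather than taken from the paper: let the logical space have two binary dimensions, generated by atomic sentences p and q. The logical atoms are the four partial assignments

\[
p{=}1, \qquad p{=}0, \qquad q{=}1, \qquad q{=}0,
\]

and the possible worlds are the four total assignments, e.g. \{p{=}1,\, q{=}0\}. The proposition expressed by p has the single minimal verifier p=1, so its objective is the situation \{p{=}1\}, while its locus is the pair of worlds containing p=1. Restricting attention to binary dimensions in this way yields exactly the Wittgensteinian special case mentioned above.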
Counter to the popular impression that Adam Smith was a champion of selfishness and greed, Jerry Muller shows that the Inquiry into the Nature and Causes of the Wealth of Nations maintained that markets served to promote the well-being of ...
We inquire into the question whether the Aristotelian or classical ideal of science has been realised by the Model Revolution, initiated at Stanford University during the 1950s and since spread throughout the world of philosophy of science (salute P. Suppes). The guiding principle of the Model Revolution is: a scientific theory is a set of structures in the domain of discourse of axiomatic set-theory, characterised by a set-theoretical predicate. We expound some critical reflections on the Model Revolution; the conclusion will be that the philosophical problem of what a scientific theory is has not been solved yet (pace P. Suppes). While reflecting critically on the Model Revolution, we also explore a proposal for how to complete the Revolution, and briefly address the intertwined subject of scientific representation, which has come to occupy center stage in philosophy of science over the past decade.
In a recent issue of this journal, M. Frisch claims to have proven that classical electrodynamics is an inconsistent physical theory. We argue that he has applied classical electrodynamics inconsistently. Frisch also claims that all other classical theories of electromagnetic phenomena, when consistent and in some sense an approximation of classical electrodynamics, are haunted by “serious conceptual problems” that defy resolution. We argue that this claim is based on a partisan if not misleading presentation of theoretical research in classical electrodynamics.
The paper applies the theory presented in A Formal Ontology of Situations (this journal, vol. 41 (1982), no. 4) to obtain a typology of metaphysical systems by interpreting them as different ontologies of situations. Four are treated in some detail: Hume's diachronic atomism, Laplacean determinism, Hume's synchronic atomism, and Wittgenstein's logical atomism. Moreover, the relation of that theory to the situation semantics of Perry and Barwise is discussed.
Most people who are regular consumers of psychoactive drugs are not drug addicts, nor will they ever become addicts. In neurobiological theories, non-addictive drug consumption is acknowledged only as a prerequisite for addiction, but not as a stable and widespread behavior in its own right. This target article proposes a new neurobiological framework theory for non-addictive psychoactive drug consumption, introducing the concept of drug instrumentalization. Psychoactive drugs are consumed for their effects on mental states. Humans are able to learn that mental states can be changed on purpose by drugs, in order to facilitate other, non-drug-related behaviors. We discuss specific instrumentalization goals and outline neurobiological mechanisms of how major classes of psychoactive drugs change mental states and serve non-drug-related behaviors. We argue that drug instrumentalization behavior may provide a functional adaptation to modern environments based on a historical selection for learning mechanisms that allow the dynamic modification of consummatory behavior. It is assumed that, in order to effectively instrumentalize psychoactive drugs, the establishment of and retrieval from a drug memory is required. Here, we propose a new classification of drug memory subtypes and discuss how they interact during drug instrumentalization learning and retrieval. Understanding the everyday utility and the learning mechanisms of non-addictive psychotropic drug use may help to prevent abuse and the transition to drug addiction in the future.
Summary In 1979, Robert C. Olby published an article titled ‘Mendel no Mendelian?’, in which he questioned commonly held views that Gregor Mendel (1822–1884) laid the foundations for modern genetics. According to Olby, and other historians of science who have since followed him, Mendel worked within the tradition of so-called hybridists, who were interested in the evolutionary role of hybrids rather than in laws of inheritance. We propose instead to view the hybridist tradition as an experimental programme characterized by a dynamic development that inadvertently led to a focus on the inheritance of individual traits. Through a careful analysis of publications on hybridization by Carl Linnaeus (1707–1778), Joseph Gottlieb Koelreuter (1733–1806), Carl Friedrich Gärtner (1772–1850), and finally Mendel himself, we will show that this development consisted in repeated reclassifications of hybrids to accommodate anomalies, which in the end allowed Mendel to draw analogies between whole organisms, individual traits, and ‘elements’ contained in reproductive cells. Mendel's achievement was a product of normal science, and yet a revolutionary step forward. This also explains why, in 1900, when the report he gave on his experiments was ‘rediscovered’, Mendel could be read as a ‘Mendelian’.
Historians and philosophers of science have interpreted the taxonomic theory of Carl Linnaeus as an ‘essentialist’, ‘Aristotelian’, or even ‘scholastic’ one. This interpretation is flatly contradicted by what Linnaeus himself had to say about taxonomy in Systema naturae, Fundamenta botanica and Genera plantarum. This paper straightens out some of the more basic misinterpretations by showing that: Linnaeus’s species concept took account of reproductive relations among organisms and was therefore not metaphysical, but biological; Linnaeus did not favour classification by logical division, but criticized it for necessarily failing to represent what he called ‘natural’ genera; Linnaeus’s definitions of ‘natural’ genera and species were not essentialist, but descriptive and polytypic; and Linnaeus’s method in establishing ‘natural’ definitions was not deductive, but consisted in an inductive, bottom-up procedure of comparing concrete specimens. The conclusion discusses the fragmentary and provisional nature of Linnaeus’s ‘natural method’. I argue in particular that Linnaeus opted for inductive strategies not on abstract epistemological grounds, but in order to confer stability and continuity on the explorative practices of contemporary natural history.
[This is the short version of: Müller, Vincent C. and Bostrom, Nick (forthcoming 2016), ‘Future progress in artificial intelligence: A survey of expert opinion’, in Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence (Synthese Library 377; Berlin: Springer).] - - - In some quarters, there is intense concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity; in other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts. Overall, the results show an agreement among experts that AI systems will probably reach overall human ability around 2040-2050 and move on to superintelligence in less than 30 years thereafter. The experts say the probability is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
This historical analysis indicates that it is highly unlikely that the Nobel Prize-winning research of Hermann J. Muller was peer-reviewed. Muller's published paper lacked a research methods section, cited no references, and failed to acknowledge and discuss the work of Gager and Blakeslee, which claimed to have induced gene mutation via ionizing radiation six months prior to Muller's non-data Science paper (Science 66:84-87, 1927a). Despite being well acclimated to the scientific world of peer review, Muller chose to avoid the peer-review process for his most significant publication. It appears that Muller's actions were strongly influenced by his desire to claim primacy for the discovery of gene mutation. The actions of Muller carry important ethical lessons and implications for today, when self-interest can trump one's obligations to society and to the scientific culture that supports the quest for new knowledge and discovery.
Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3#
- Risks of general artificial intelligence, Vincent C. Müller, pages 297-301
- Autonomous technology and the greater human good, Steve Omohundro, pages 303-315
- The errors, insights and lessons of famous AI predictions – and what they mean for the future, Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh, pages 317-342
- The path to more general artificial intelligence, Ted Goertzel, pages 343-354
- Limitations and risks of machine ethics, Miles Brundage, pages 355-372
- Utility function security in artificially intelligent agents, Roman V. Yampolskiy, pages 373-389
- GOLEM: towards an AGI meta-architecture enabling both goal preservation and radical self-improvement, Ben Goertzel, pages 391-403
- Universal empathy and ethical bias for artificial general intelligence, Alexey Potapov & Sergey Rodionov, pages 405-416
- Bounding the impact of AGI, András Kornai, pages 417-438
- Ethics of brain emulations, Anders Sandberg, pages 439-457
The cell is not only the structural, physiological, and developmental unit of life, but also the reproductive one. So far, however, this aspect of the cell has received little attention from historians and philosophers of biology. I will argue that cell theory had far-reaching consequences for how biologists conceptualized the reproductive relationships between germs and adult organisms. Cell theory, as formulated by Theodor Schwann in 1839, implied that this relationship was a specific and lawful one, that is, that germs of a certain kind, all else being equal, would produce adult organisms of the same kind, and vice versa. Questions of preformation and epigenesis took on a new meaning under this presupposition. The question then became one of whether cells could be considered as autonomous agents producing adult organisms of a given species, or whether they were the product of external, organizing forces and thus only a stage in the development of the whole organism. This question became an important issue for nineteenth-century biology. As I will demonstrate, it was the view of cells as autonomous agents which helped both Charles Darwin and Gregor Mendel to think of inheritance as a lawful process.
The author endeavours to show two things: first, that Schrödinger's (and Eckart's) demonstration in March (September) 1926 of the equivalence of matrix mechanics, as created by Heisenberg, Born, Jordan and Dirac in 1925, and wave mechanics, as created by Schrödinger in 1926, is not foolproof; and second, that it could not have been foolproof, because at the time matrix mechanics and wave mechanics were neither mathematically nor empirically equivalent. That they were is the Equivalence Myth. In order to make the theories equivalent and to prove this, one has to leave the historical scene of 1926 and wait until 1932, when von Neumann finished his magisterial edifice. During the period 1926–1932 the original families of mathematical structures of matrix mechanics and of wave mechanics were stretched, parts were chopped off and novel structures were added. To Procrustean places we go, where we can demonstrate the mathematical, empirical and ontological equivalence of ‘the final versions of’ matrix mechanics and wave mechanics. The present paper claims to be a comprehensive analysis of one of the pivotal papers in the history of quantum mechanics: Schrödinger's equivalence paper. Since the analysis is performed from the perspective of Suppes's structural view (‘semantic view’) of physical theories, the present paper can be regarded not only as a morsel of the internal history of quantum mechanics, but also as a morsel of applied philosophy of science. The paper is self-contained and presupposes only basic knowledge of quantum mechanics. For reasons of length, the paper is published in two parts; Part I appeared in the previous issue of this journal. Section 1 contains, besides an introduction, the paper's five claims and a preview of the arguments supporting these claims; Part I, Section 1 may thus serve as a summary of the paper for those readers who are not interested in the detailed arguments.
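For orientation, a sketch of the construction whose cogency is at issue, in its standard textbook form and assuming a complete orthonormal set of eigenfunctions ψ_n (completeness being one of the tacit assumptions such proofs rely on): wave mechanics generates the matrices of matrix mechanics via

\[
X_{mn} = \int \psi_m^{*}\, x\, \psi_n \, dx, \qquad P_{mn} = \int \psi_m^{*} \Bigl( -i\hbar \frac{\partial}{\partial x} \Bigr) \psi_n \, dx,
\]

and these matrices then satisfy the matrix-mechanical commutation relation XP − PX = iħI. Whether such a mapping establishes genuine equivalence in 1926, rather than a correspondence holding only under later, stronger structural assumptions, is precisely what the paper contests.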
Beauchamp and Childress have performed a great service by strengthening the principle of respect for the patient's autonomy against the paternalism that dominated medicine until at least the 1970s. Nevertheless, we think that the concept of autonomy should be elaborated further. We suggest such an elaboration built on recent developments within the neurosciences and the free will debate. The reason for this suggestion is at least twofold: first, Beauchamp and Childress neglect some important elements of autonomy; second, neuroscience itself needs a conceptual apparatus to deal with the neural basis of autonomy for diagnostic purposes. This desideratum is becoming more pressing because modern therapy options can considerably influence the neural basis of autonomy itself.
Prompted by the recent recognition of the omnipresence of horizontal gene transfer among microbial species, and the associated emphasis on exchange, rather than isolation, as the driving force of evolution, this essay reflects on hybridization as one of the central concerns of nineteenth-century biology. I will argue that an emphasis on horizontal exchange was already endorsed by ‘biology’ when it came into being around 1800 and was brought to full fruition with the emergence of genetics in 1900. The true revolution in nineteenth-century life sciences, I maintain, consisted in a fundamental shift in ontology, which eroded the boundaries between individual and species and allowed biologists to move up and down the scale of organic complexity. Life became a property extending both ‘downwards’, to the parts that organisms were composed of, as well as ‘upwards’, to the collective entities constituted by the relations of exchange and interaction that organisms engage in to reproduce. This mode of thinking was crystallized by Gregor Mendel and consolidated in the late nineteenth-century conjunction of biochemistry, microbiology and breeding in agro-industrial settings. This conjunction and its implications are exemplified especially by Wilhelm Johannsen’s and Martinus Beijerinck’s work on pure lines and cultures. An understanding of the subsequent constraints imposed by the evolutionary synthesis of the twentieth century on models of genetic systems may require us to rethink the history of biology and displace Darwin’s theory of natural selection from that history’s centre.
For over 30 years I have argued that we need to construe science as accepting a metaphysical proposition concerning the comprehensibility of the universe. In a recent paper, Fred Muller criticizes this argument, and its implication that Bas van Fraassen’s constructive empiricism is untenable. In the present paper I argue that Muller’s criticisms are not valid. The issue is of some importance, for my argument that science accepts a metaphysical proposition is the first step in a broader argument intended to demonstrate that we need to bring about a revolution in science, and ultimately in academic inquiry as a whole, so that the basic aim becomes wisdom and not just knowledge.
A comprehensive biography which covers Adorno's life, work and times: from childhood, through to his student years, his years in emigration, his return to post-war Germany, his time in Frankfurt, his role as a public intellectual, and his ...