A major voice in late twentieth-century philosophy, Alan Donagan is distinguished for his theories on the history of philosophy and the nature of morality. The Philosophical Papers of Alan Donagan, volumes 1 and 2, collect 28 of Donagan's most important and best-known essays on historical understanding and ethics from 1957 to 1991. Volume 2 addresses issues in the philosophy of action and moral theory. With papers on Kant, von Wright, Sellars, and Chisholm, this volume also covers a range of questions in applied ethics--from the morality of Truman's decision to drop atomic bombs on Hiroshima and Nagasaki to ethical questions in medicine and law.
This paper concerns Alan Turing’s ideas about machines, mathematical methods of proof, and intelligence. By the late 1930s, Kurt Gödel and other logicians, including Turing himself, had shown that no finite set of rules could be used to generate all true mathematical statements. Yet according to Turing, there was no upper bound to the number of mathematical truths provable by intelligent human beings, for they could invent new rules and methods of proof. So, the output of a human mathematician, for Turing, was not a computable sequence (i.e., one that could be generated by a Turing machine). Since computers only contained a finite number of instructions (or programs), one might argue, they could not reproduce human intelligence. Turing called this the “mathematical objection” to his view that machines can think. Logico-mathematical reasons, stemming from his own work, helped to convince Turing that it should be possible to reproduce human intelligence, and eventually compete with it, by developing the appropriate kind of digital computer. He felt it should be possible to program a computer so that it could learn or discover new rules, overcoming the limitations imposed by the incompleteness and undecidability results in the same way that human mathematicians presumably do.
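The claim that a human mathematician's output is not a computable sequence rests on a Cantor-style diagonal argument: given any effective enumeration of sequences, one can define a sequence that differs from every one of them. A minimal sketch (the finite enumeration here is a hypothetical stand-in for "all machine-generated sequences"):

```python
# Diagonal argument: given an enumeration of binary sequences
# (each represented as a function from index -> bit), build a
# sequence that differs from the n-th sequence at position n.

def diagonal_complement(enumeration):
    """Return a sequence guaranteed to be absent from `enumeration`."""
    return lambda n: 1 - enumeration[n](n)

# A toy, finite enumeration standing in for an infinite effective one.
enumeration = [
    lambda n: 0,      # the all-zeros sequence
    lambda n: 1,      # the all-ones sequence
    lambda n: n % 2,  # alternating 0, 1, 0, 1, ...
]

d = diagonal_complement(enumeration)

# d differs from sequence i at position i, so it equals none of them.
for i, seq in enumerate(enumeration):
    assert d(i) != seq(i)

print([d(n) for n in range(3)])  # -> [1, 0, 1]
```

Whatever rules a machine follows determine such an enumeration, so the diagonal sequence lies outside its reach; Turing's suggestion was that a learning machine could revise its own rules and thereby escape any fixed enumeration.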
Alan Shewmon's article, "The brain and somatic integration: Insights into the standard biological rationale for equating brain death with death" (2001), strikes at the heart of the standard justification for whole brain death criteria. The standard justification, which I call the standard paradigm, holds that the permanent loss of the functions of the entire brain marks the end of the integrative unity of the body. In my response to Shewmon's article, I first offer a brief summary of the standard paradigm and cite recent work by advocates of whole brain criteria who tenaciously cling to the standard paradigm despite increasing evidence showing that it has significant weaknesses. Second, I address Shewmon's case against the standard paradigm, arguing that he is successful in showing that whole brain dead patients have integrated organic unity. Finally, I discuss some minor problems with Shewmon's article, along with suggestions for further elaboration.
Alan Gewirth's Reason and Morality, in which he set forth the Principle of Generic Consistency, is a major work of modern ethical theory that, though much debated and highly respected, has yet to gain full acceptance. Deryck Beyleveld contends that this resistance stems from misunderstanding of the method and logical operations of Gewirth's central argument. In this book Beyleveld seeks to remedy this deficiency. His rigorous reconstruction of Gewirth's argument gives its various parts their most compelling formulation and clarifies its essential logical structure. Beyleveld then classifies all the criticisms that Gewirth's argument has received and measures them against his reconstruction of the argument. The overall result is an immensely rich picture of the argument, in which all of its complex issues and key moves are clearly displayed and its validity can finally be discerned. The comprehensiveness of Beyleveld's treatment provides ready access to the entire debate surrounding the foundational argument of Reason and Morality. It will be required reading for all who are interested in Gewirth's theory and deontological ethics and will be of central importance to moral and legal theorists.
D. Alan Shewmon has advanced a well-documented challenge to the widely accepted total brain death criterion for death of the human being. We show that Shewmon's argument against this criterion is unsound, though he does refute the standard argument for that criterion. We advance a distinct argument for the total brain death criterion and answer likely objections. Since human beings are rational animals – sentient organisms of a specific type – the loss of the radical capacity for sentience (the capacity to sense or to develop the capacity to sense) involves a substantial change, the passing away of the human organism. In human beings total brain death involves the complete loss of the radical capacity for sentience, and so in human beings total brain death is death.
As is well known, Alan Turing drew a line, embodied in the "Turing test," between intellectual and physical abilities, and hence between cognitive and natural sciences. Less familiarly, he proposed that one way to produce a "passer" would be to educate a "child machine," equating the experimenter's improvements in the initial structure of the child machine with genetic mutations, while supposing that the experimenter might achieve improvements more expeditiously than natural selection. On the other hand, in his foundational "On the chemical basis of morphogenesis," Turing insisted that biological explanation confine itself strictly to purely physical and chemical means, eschewing vitalist and teleological talk entirely and hewing to D'Arcy Thompson's line that "evolutionary 'explanations'" are historical and narrative in character, employing the same intentional and teleological vocabulary we use in doing human history, and hence, while perhaps on occasion of heuristic value, are not part of biology as a natural science. To apply Turing's program to recent issues, the attempt to give foundations to the social and cognitive sciences in the "real science" of evolutionary biology (as opposed to Turing's biology) is neither to give foundations, nor to achieve the unification of the social/cognitive sciences and the natural sciences.
I live just off of Bell Road outside of Newburgh, Indiana, a small town of 3,000 people. A mile down the street Bell Road intersects with Telephone Road not as a modern reminder of a technology belonging to bygone days, but as testimony that this technology, now more than a century and a quarter old, is still with us. In an age that prides itself on its digital devices and in which the computer now equals the telephone as a medium of communication, it is easy to forget the debt we owe to an era that industrialized the flow of information, and that the light bulb, to pick a singular example, which is useful for upgrading visual information we might otherwise overlook, nonetheless remains the most prevalent of all modern-day information technologies. Edison's light bulb, of course, belongs to a different order of informational devices than the computer, but not so the telephone, not entirely anyway. Alan Turing, best known for his work on the Theory of Computation (1937), the Turing Machine (also 1937), and the Turing Test (1950), is often credited with being the father of computer science and the father of artificial intelligence. Less well known to the casual reader but equally important is his work in computer engineering. The following lecture on the Automatic Computing Engine, or ACE, shows Turing in this different light, as a mechanist concerned with getting the greatest computational power from minimal hardware resources. Yet Turing's work on mechanisms is often eclipsed by his thoughts on computability and his other theoretical interests. This is unfortunate for several reasons, one being that it obscures our picture of the historical trajectory of information technology, a second that it emphasizes a false dichotomy between "hardware" and "software" to which Turing himself did not subscribe but which has, nonetheless, confused researchers who study the nature of mind and intelligence for generations.
A veil of mystery hangs over the death of the English polymath Alan Turing. It is quite possible that no other modern thinker's death is surrounded by as many legends and speculations. The undisputed facts are, in brief, as follows: a cleaner found Turing dead at his home on 8 June 1954. Turing was found to have died the previous evening of cyanide poisoning, and a half-eaten apple lay beside him. He was 41 years old at his death. The rest is more or less a matter of conjecture.
The two books reviewed here are different efforts to embrace the vast subject called "social thought." The second edition of The Blackwell Dictionary of Modern Social Thought, edited by William Outhwaite with Alain Touraine, contains numerous updates; yet it also has some disadvantages compared to the first edition. Social Thought: From the Enlightenment to the Present, edited by Alan Sica, is a bold but controversial attempt at gathering in one anthology as many social thinkers as possible. Key Words: "social" • social thought/theory • William Outhwaite • Alan Sica • explanation.
For much of the second half of the 20th Century, the primary role logical empiricism played was that of the argumentative foil. The 'received view' on a given topic (especially in philosophy of science, logic, or language) was frequently identified with some supposedly dogmatic tenet of logical empiricism. However, during the last twenty-five years, scholars have paid serious, sustained attention to what the logical positivists, individually and collectively, actually said. Early scholarship on logical empiricism had to engage in heavy-duty PR work: why should anyone study the now-discarded mixture of blunders and implausibilities collected under the label 'logical empiricism'? However, thanks to the efforts of the pioneers, people studying the logical empiricists today need not articulate an extended apologia for their chosen subject of study -- rather, they can simply get on with their work. Many of the best fruits of these recent labors are on display in The Cambridge Companion to Logical Empiricism (CCLE), edited by Alan Richardson and Thomas Uebel.
In his short life, Alan Turing (1912-1954) made foundational contributions to philosophy, mathematics, biology, artificial intelligence, and computer science. He, as much as anyone, invented the digital electronic computer. From September, 1939, much of his work on computation was war-driven and brutally practical. He developed high speed computing devices needed to decipher German Enigma Machine messages to and from U-boats, countering the most serious threat by far to Britain's survival during World War Two. Yet few people have an image of him.
Explaining his now famous parody in Social Text's "Science Wars" issue, Alan Sokal writes in Dissent ("Afterword", Fall 1996): But why did I do it? I confess that I'm an unabashed Old Leftist who never quite understood how deconstruction was supposed to help the working class. And I'm a stodgy old scientist who believes, naively, that there exists an external world, that there exist objective truths about that world, and that my job is to discover some of them. There is much to note in this "confession." Why choose a hoax on Social Text to make these points? Did Sokal believe its editors were unabashed deconstructionists who doubted the existence of an external world or that they were anti-science? If so, he has either misread the burden of its seventeen-year history or was capricious in his choice. If not, then he has perpetuated the saddest hoax of all: on himself. For the fact is that Social Text, of which I am a founder and in whose editorial collective I served until this year, has never been in the deconstructionist camp; nor do its editors or the preponderance of its contributors doubt the existence of a material world. What is at issue is whether our knowledge of it can possibly be free of social and cultural presuppositions.
In his short life, Alan Turing (1912-1954) made foundational contributions to philosophy, mathematics, biology, artificial intelligence, and computer science. He, as much as anyone, invented and showed how to program the digital electronic computer. From September, 1939, his work on computation was war-driven and brutally practical. He developed high speed computing devices needed to decipher German Enigma Machine messages to and from U-boats, countering the most serious threat by far to Britain's survival during World War Two.
The origin of my article lies in the appearance of Copeland and Proudfoot's feature article in Scientific American, April 1999. This preposterous paper, as described on another page, suggested that Turing was the prophet of 'hypercomputation'. In their references, the authors listed Copeland's entry on 'The Church-Turing thesis' in the Stanford Encyclopedia. In the summer of 1999, I circulated an open letter criticising the Scientific American article. I included criticism of this Encyclopedia entry. This was forwarded (by Prof. Sol Feferman) to Prof. Ed Zalta, editor of the Encyclopedia, and after some discussion he invited me to submit an entry on 'Alan Turing'.
Alan Musgrave has been one of the most important philosophers of science in the last quarter of the 20th century. He has exemplified an exceptional combination of clear-headed and profound philosophical thinking. Two commitments seem to be the pillars of his thought: an uncompromising commitment to scientific realism and an equally uncompromising commitment to deductivism. The essays reprinted in this volume (which span a period of 25 years, from 1974 to 1999) testify to these two commitments. (There are two omissions from this collection: "Realism, Truth and Objectivity" in Realism and Anti-realism in the Philosophy of Science (1996, Kluwer) and "How to Do without Inductive Logic" (Science & Education, vol. 8, 1999). I will make some references to these papers in what follows.) In the present review, instead of giving an orderly summary of the 16 papers of Essays, I discuss Musgrave's two major commitments and raise some worries about their combination.
In his short life, Alan Turing (1912-1954) made foundational contributions to philosophy, mathematics, biology, artificial intelligence, and computer science. He, as much as anyone, invented and showed how to program the digital electronic computer. From September, 1939, his work on computation was war-driven and brutally practical. He developed high speed computing devices needed to decipher German Enigma Machine messages to and from U-boats, countering the most serious threat by far to Britain's survival during World War Two.
The mathematical genius Alan Turing (1912-1954) was one of the greatest scientists and thinkers of the 20th century. Now well known for his crucial wartime role in breaking the ENIGMA code, he was the first to conceive of the fundamental principle of the modern computer: the idea of controlling a computing machine's operations by means of a program of coded instructions, stored in the machine's 'memory'. In 1945 Turing drew up his revolutionary design for an electronic computing machine: his Automatic Computing Engine ('ACE'). A pilot model of the ACE ran its first program in 1950 and the production version, the 'DEUCE', went on to become a cornerstone of the fledgling British computer industry. The first 'personal' computer was based on Turing's ACE.

Alan Turing's Automatic Computing Engine describes Turing's struggle to build the modern computer. The first detailed history of Turing's contributions to computer science, this text is essential reading for anyone interested in the history of the computer and the history of mathematics. It contains first-hand accounts by Turing and by the pioneers of computing who worked with him. As well as relating the story of the invention of the computer, the book clearly describes the hardware and software of the ACE, including the very first computer programs. The book is intended to be accessible to everyone with an interest in computing, and contains numerous diagrams and illustrations as well as original photographs.

The book contains chapters describing Turing's path-breaking research in the fields of Artificial Intelligence (AI) and Artificial Life (A-Life). The book has an extensive system of hyperlinks to The Turing Archive for the History of Computing, an on-line library of digital facsimiles of typewritten documents by Turing and the other scientists who pioneered the electronic computer.
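The stored-program principle described above can be illustrated in a few lines: instructions and data share one memory, and the machine runs by fetching and decoding coded instructions from that memory. The instruction set below is a hypothetical toy, not Turing's actual ACE order code:

```python
# A toy stored-program machine: instructions and data live in the same
# memory, and execution is driven by instructions fetched from it.
# The opcodes here are invented for illustration only.

def run(memory):
    acc, pc = 0, 0  # accumulator and program counter
    while True:
        op, arg = memory[pc]
        pc += 1
        if op == "LOAD":
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

# The program occupies cells 0-3; its data lives in cells 4-6.
memory = {
    0: ("LOAD", 4),
    1: ("ADD", 5),
    2: ("STORE", 6),
    3: ("HALT", 0),
    4: 2,
    5: 3,
    6: 0,
}
print(run(memory)[6])  # -> 5 (computes 2 + 3 and stores the result)
```

Because the program itself sits in addressable memory, a program can in principle read or rewrite other programs, which is exactly what makes a single machine universal.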
The December 2008 White Paper (WP) on “Brain Death” published by the President’s Council on Bioethics (PCBE) reaffirmed its support for the traditional neurological criteria for human death. It spends considerable time explaining and critiquing what it takes to be the most challenging recent argument opposing the neurological criteria formulated by D. Alan Shewmon, a leading critic of the “whole brain death” standard. The purpose of this essay is to evaluate and critique the PCBE’s argument. The essay begins with a brief background on the history of the neurological criteria in the United States and on the preparation of the 2008 WP. After introducing the WP’s contents, the essay sets forth Shewmon’s challenge to the traditional neurological criteria and the PCBE’s reply to Shewmon. The essay concludes by critiquing the WP’s novel justification for reaffirming the traditional conclusion, a justification the essay finds wanting.
This is the second of two volumes of essays in commemoration of Alan Turing; it celebrates his intellectual legacy within the philosophy of mind and cognitive science. A distinguished international cast of contributors focus on the relationship between a scientific, computational image of the mind and a common-sense picture of the mind as an inner arena populated by concepts, beliefs, intentions, and qualia. Topics covered include the causal potency of folk-psychological states, the connectionist reconception of learning and concept formation, the understanding of the notion of computation itself, and the relation between philosophical and psychological theories of concepts.

Also available in paperback is the companion volume, Machines and Thought, edited by Peter Millican and Andy Clark, which focuses on Turing's main innovations in artificial intelligence.
In this paper we present the syntax and semantics of a temporal action language named Alan, which was designed to model interactive multimedia presentations where the Markov property does not always hold. In general, Alan allows the specification of systems where the future state of the world depends not only on the current state, but also on the past states of the world. To the best of our knowledge, Alan is the first action language which incorporates causality with temporal formulas. In the process of defining the effect of actions we define the closure with respect to a path rather than to a state, and show that the non-Markovian model is an extension of the traditional Markovian model. Finally, we establish a relationship between theories of Alan and logic programs.
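The Markovian/non-Markovian distinction the abstract draws can be illustrated generically (this is not Alan's actual syntax): a Markovian transition consults only the current state, while a non-Markovian one may consult the whole path of past states.

```python
# Generic illustration of the distinction, not the Alan language itself.

def markovian_step(state):
    # The next state depends only on the current state.
    return state + 1

def non_markovian_step(path):
    # The next state depends on the history: here, the action's effect
    # fires only if some condition (state 0) held at an earlier point.
    return path[-1] + 1 if 0 in path else path[-1]

path = [0]
for _ in range(3):
    path.append(non_markovian_step(path))
print(path)  # -> [0, 1, 2, 3]; the effect fires because state 0 occurred
```

Taking the closure of effects with respect to a path, as the paper does, generalizes the usual state-based closure: a Markovian system is just the special case where the transition ignores everything in the path except its last element.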
This is the first of two volumes of essays in commemoration of Alan Turing, whose pioneering work in the theory of artificial intelligence and computer science continues to be widely discussed today. A group of prominent academics from a wide range of disciplines focus on three questions famously raised by Turing: What, if any, are the limits on machine 'thinking'? Could a machine be genuinely intelligent? Might we ourselves be biological machines, whose thought consists essentially in nothing more than the interaction of neurons according to strictly determined rules? The discussion of these fascinating issues is accessible to non-specialists and stimulating for all readers.

Also available in paperback is the companion volume: Connectionism, Concepts, and Folk Psychology, edited by Andy Clark and Peter Millican. While Volume 1 concentrates on Turing's main innovations in artificial intelligence, Volume 2 looks more broadly at his intellectual legacy in philosophy and cognitive science.
Alan Gewirth's Reason and Morality directed philosophical attention to the possibility of presenting a rational and rigorous demonstration of fundamental moral principles. Now, these previously unpublished essays from some of the most distinguished philosophers of our generation subject Gewirth's program to thorough evaluation and assessment. In a tour de force of philosophical analysis, Professor Gewirth provides detailed replies to all of his critics--a major, genuinely clarifying essay of intrinsic philosophical interest.
Can torture be morally justified? I shall criticise arguments that have been adduced against torture and demonstrate that torture can be justified more easily than most philosophers dealing with the question are prepared to admit. It can be justified not only in ticking nuclear bomb cases but also in less spectacular ticking bomb cases and even in the so-called Dirty Harry cases. There is no morally relevant difference between self-defensive killing of a culpable aggressor and torturing someone who is culpable of a deadly threat that can be averted only by torturing him. Nevertheless, I shall argue that torture should not be institutionalised, for example by torture warrants.
I imagine that people will complain that the account of normative concepts defended in Gibbard's new book makes the metaethical waters even muddier because it blurs the line between cognitivism and noncognitivism and between realism and antirealism. However, these labels are philosophic tools, and in the wake of Gibbard's new book, one might rightly conclude that there are new and better philosophical tools emerging on the metaethical scene. The uptake of views about practical reasoning—as exhibited by planning—into debates about the meaning of normative claims is a fruitful line of research. And, as Wise Choices, Apt Feelings made an initial bold step into novel and fruitful lines of research into the connection between psychology and morality, Thinking How to Live has also made an initial bold step into novel and fruitful lines of research into the connection between practical reasoning and normative semantics.
One type of deflationism about metaphysical modality suggests that it can be analysed strictly in terms of linguistic or conceptual content and that there is nothing particularly metaphysical about modality. Scott Soames is explicitly opposed to this trend. However, a detailed study of Soames’s own account of modality reveals that it has striking similarities with the deflationary account. In this paper I will compare Soames’s account of a posteriori necessities concerning natural kinds with the deflationary one, specifically Alan Sidelle’s account, and suggest that Soames’s account is vulnerable to the deflationist’s critique. Furthermore, I conjecture that both the deflationary account and Soames’s account fail to fully explicate the metaphysical content of a posteriori necessities. Although I will focus on Soames, my argument may have more general implications towards the prospects of providing a meaning-based account of metaphysical modality.
On the 27th of October, 1949, the Department of Philosophy at the University of Manchester organized a symposium, "Mind and Machine", as Michael Polanyi noted in his Personal Knowledge (1974, p. 261). This event is known, especially among scholars of Alan Turing, but it is scarcely documented. Wolfe Mays (2000) reported on the debate, which he personally had attended, and paraphrased a mimeographed document that is preserved at the Manchester University archive. He forwarded a copy to Andrew Hodges and B. Jack Copeland, who then published it on their respective websites. The basis of the interpretation here is the copy preserved in the Regenstein Library of the University of Chicago, Special Collections, Polanyi Collection (abbreviated RPC, box 22, folder 19). The same collection holds the mimeographed statement that Polanyi prepared for this symposium: "Can the mind be represented by a machine?" This text has not been studied by Polanyi scholars.
My theory of biocentric consequentialism is first shown not to be significantly inegalitarian, despite not advocating treating all creatures equally. I then respond to Carter's objections concerning population, species extinctions, the supposed minimax implication, endangered interests, autonomy and thought-experiments. Biocentric consequentialism is capable of supporting a sustainable human population at a level compatible with preserving most non-human species, as opposed to catastrophic population increases or catastrophic decimation. Nor is it undermined by the mere conceivable possibility of counter-intuitive implications. While Carter shows that value-pluralism need not be riddled with contradictions, his version still introduces some, and faces further problems. Thus consequentialist theories may be needed to sift our values, at least if our values are commensurable. Carter's apparent suggestion that monistic theories such as biocentric consequentialism can never be harnessed to rich theories of value and must each myopically give undue prominence to a single value is questioned.
We investigate Turing's contributions to computability theory for real numbers and real functions presented in [22, 24, 26]. In particular, it is shown how two fundamental approaches to computable analysis, the so-called ‘Type-2 Theory of Effectivity' (TTE) and the ‘real-RAM machine' model, have their foundations in Turing's work, in spite of the two incompatible notions of computability they involve. It is also shown, by contrast, how the modern conceptual tools provided by these two paradigms allow a systematic interpretation of Turing's pioneering work in the subject.
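The TTE notion of a computable real, which descends from Turing's definition via decimal approximations, can be sketched as a function that returns, for each n, a rational within 2**-n of the number. A minimal illustration, approximating the square root of 2 by interval bisection:

```python
from fractions import Fraction

# TTE-style sketch: a real number is "given" by a procedure producing
# arbitrarily good rational approximations with a known error bound.

def sqrt2(n):
    """Return a rational within 2**-n of the square root of 2."""
    lo, hi = Fraction(1), Fraction(2)
    while hi - lo > Fraction(1, 2 ** n):
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid
        else:
            hi = mid
    return lo

print(float(sqrt2(10)))  # approximately 1.414...
```

A real-RAM machine, by contrast, is an idealized model that stores and operates on exact real numbers in unit time; the two notions of computability disagree, which is the tension the paper examines.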
It is not widely realised that Turing was probably the first person to consider building computing machines out of simple, neuron-like elements connected together into networks in a largely random manner. Turing called his networks unorganised machines. By the application of what he described as appropriate interference, mimicking education, an unorganised machine can be trained to perform any task that a Turing machine can carry out, provided the number of neurons is sufficient. Turing proposed simulating both the behaviour of the network and the training process by means of a computer program. We outline Turing's connectionist project of 1948.
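An unorganised machine of Turing's A-type can be simulated in a few lines: each unit computes the NAND of two inputs drawn from randomly chosen units, and all units update synchronously. The network size and wiring below are arbitrary choices for illustration:

```python
import random

# Sketch of a Turing-style A-type unorganised machine: NAND units
# wired together at random and updated in lockstep.

random.seed(0)
N = 8  # number of units (arbitrary)

# Each unit receives input from two randomly chosen units.
wiring = [(random.randrange(N), random.randrange(N)) for _ in range(N)]
state = [random.randrange(2) for _ in range(N)]

def step(state):
    # Every unit simultaneously computes NAND of its two inputs.
    return [1 - (state[a] & state[b]) for a, b in wiring]

for _ in range(5):
    state = step(state)
print(state)  # the network's bit-vector state after five synchronous steps
```

Turing's "appropriate interference" would amount to modifying the wiring (or enabling/disabling connections, as in his B-type machines) until the network exhibits the desired behaviour, which is the training process he proposed to simulate.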