This book is a general introduction to the philosophy of John Locke, one of the most influential thinkers in modern times. Nicholas Jolley aims to show the fundamental unity of Locke's thought in his masterpiece, the Essay Concerning Human Understanding. In this work Locke advances a coherent theory of knowledge; as against Descartes he argues that knowledge is possible to the extent that it concerns essences which are constructions of the human mind.
The concept of an "idea" played a central role in 17th-century theories of mind and knowledge, but philosophers were divided over the nature of ideas. This book examines an important, but little-known, debate on this question in the work of Leibniz, Malebranche, and Descartes. Looking closely at the issues involved, as well as the particular context in which the debate took place, Jolley demonstrates that the debate has serious implications for a number of major topics in 17th-century philosophy.
Gottfried Wilhelm Leibniz (1646-1716) was hailed by Bertrand Russell as "one of the supreme intellects of all time." A towering figure in seventeenth-century philosophy, his complex thought has been championed and satirized in equal measure, most famously in Voltaire's Candide. In this outstanding introduction to his philosophy, Nicholas Jolley introduces and assesses the whole of Leibniz's philosophy. Beginning with an introduction to Leibniz's life and work, he carefully introduces the core elements of Leibniz's metaphysics: his theories of substance, identity and individuation; monads and space and time; and his important debate over the nature of space and time with Newton's champion, Samuel Clarke. He then introduces Leibniz's theories of mind, knowledge, and innate ideas, showing how Leibniz anticipated the distinction between conscious and unconscious states, before examining his theory of free will and the problem of evil. An important feature of the book is its introduction to Leibniz's moral and political philosophy, an overlooked aspect of his work. The final chapter assesses Leibniz's legacy and the impact of his philosophy on later philosophy, particularly on the work of Immanuel Kant. Throughout, Nicholas Jolley places Leibniz in relation to some of the other great philosophers, such as Descartes, Spinoza and Locke, and discusses Leibniz's key works, such as the Monadology and Discourse on Metaphysics.
This is the first modern interpretation of Leibniz's comprehensive critique of Locke, the New Essays on Human Understanding. Arguing that the New Essays is controlled by the overriding purpose of refuting Locke's alleged materialism, Jolley establishes the metaphysical and theological motivation of the work on the basis of unpublished correspondence and manuscript material. He also shows the relevance of Leibniz's views to contemporary debates over innate ideas, personal identity, and natural kinds.
A major voice in late twentieth-century philosophy, Alan Donagan is distinguished for his theories on the history of philosophy and the nature of morality. The Philosophical Papers of Alan Donagan, volumes 1 and 2, collect 28 of Donagan's most important and best-known essays on historical understanding and ethics from 1957 to 1991. Volume 2 addresses issues in the philosophy of action and moral theory. With papers on Kant, von Wright, Sellars, and Chisholm, this volume also covers a range of questions in applied ethics--from the morality of Truman's decision to drop atomic bombs on Hiroshima and Nagasaki to ethical questions in medicine and law.
The preceding article by Marc Bekoff reveals much about our current understanding of animal self-consciousness and its implications. It also reveals how much more there is to be said and considered. This response briefly examines animal self-consciousness from scientific, moral, and theological perspectives. As Bekoff emphasizes, self-consciousness is not one thing but many. Consequently, our moral relationship to animals is not simply one based on a graded hierarchy of abilities. Furthermore, the complexity of animal self-awareness can serve as a stimulus for thinking about issues of theodicy and soteriology in a broader sense.
For scientific essentialists, the only logical possibilities of existence are the real (or metaphysical) ones, and such possibilities, they say, are relative to worlds. They are not a priori, and they cannot just be invented. Rather, they are discoverable only by the a posteriori methods of science. There are, however, many philosophers who think that real possibilities are knowable a priori, or that they can just be invented. Marc Lange [Lange 2004] thinks that they can be invented, and tries to use his inventions to argue that the essentialist theory of counterfactual conditionals developed in Scientific Essentialism [Ellis 2001, hereafter SE] is flawed.
Marc Lange's new book on laws offers a restatement and development of the account he proposed in Natural Laws and Scientific Practice (Oxford University Press, 2000), henceforth NLSP, and the new material is helpfully summarized in the preface. Laws and Lawmakers presents the key idea from NLSP in a rather more reader-friendly manner – this idea being roughly that the difference between laws and accidents is that laws, unlike accidents, form a 'stable' set, i.e. a logically closed set of truths such that they would all still hold under any counterfactual supposition consistent with the set. So, for example, the natural laws all still hold under counterfactual suppositions such as 'had this match been struck …', 'had Bill Gates wanted to build a gold cube one mile across' and so on; thus this set is stable. But the set of laws plus the accidental claim 'there is no gold cube one mile across' fails to hold under such counterfactual suppositions because had Bill Gates wanted to build a gold cube one mile across, such a cube might well have come into existence; thus this set is not stable. While the basic outline and defence of this idea are provided in Chapter 1, those wishing to delve into the intricate …
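For readers who want the key idea in compact form, here is a schematic rendering of the stability condition as summarized above; the notation (Γ for the candidate set of truths, Con for consistency, □→ for the counterfactual conditional) is shorthand introduced for this sketch, not Lange's own formulation.

```latex
% Schematic paraphrase of stability (my notation, not Lange's):
% a logically closed set of truths \Gamma is stable iff every member
% would still hold under any supposition consistent with \Gamma.
\[
\mathrm{Stable}(\Gamma) \;\iff\;
\forall p \,\bigl[\, \mathrm{Con}(\Gamma \cup \{p\}) \;\rightarrow\;
\forall q \in \Gamma \;(p \,\Box\!\!\rightarrow q) \,\bigr]
\]
```

On this rendering the laws form a stable set, whereas the laws plus 'there is no gold cube one mile across' do not, since the Bill Gates supposition is consistent with that enlarged set and yet one of its members might fail under it.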
This paper concerns Alan Turing’s ideas about machines, mathematical methods of proof, and intelligence. By the late 1930s, Kurt Gödel and other logicians, including Turing himself, had shown that no finite set of rules could be used to generate all true mathematical statements. Yet according to Turing, there was no upper bound to the number of mathematical truths provable by intelligent human beings, for they could invent new rules and methods of proof. So, the output of a human mathematician, for Turing, was not a computable sequence (i.e., one that could be generated by a Turing machine). Since computers only contained a finite number of instructions (or programs), one might argue, they could not reproduce human intelligence. Turing called this the “mathematical objection” to his view that machines can think. Logico-mathematical reasons, stemming from his own work, helped to convince Turing that it should be possible to reproduce human intelligence, and eventually compete with it, by developing the appropriate kind of digital computer. He felt it should be possible to program a computer so that it could learn or discover new rules, overcoming the limitations imposed by the incompleteness and undecidability results in the same way that human mathematicians presumably do.
The aim of this paper is to reconstruct the debate on Begriffstheorie between Ernst Cassirer, the Swedish philosopher Konrad Marc-Wogau, and, virtually, Moritz Schlick. It took place in the late thirties, when Cassirer had immigrated to Sweden. While Cassirer argued for a rich "constitutive" theory of concepts, Marc-Wogau, and, in a different way, Schlick favored "austere" non-constitutive theories of concepts. Ironically, however, Cassirer used Schlick's account as a weapon to counter Marc-Wogau's criticism of his rich constitutive theory of concepts. With the help of modern Formal Concept Theory (FCT) it can be shown, however, that Marc-Wogau's argument is flawed.
Alan Shewmon's article, "The brain and somatic integration: Insights into the standard biological rationale for equating brain death with death" (2001), strikes at the heart of the standard justification for whole brain death criteria. The standard justification, which I call the standard paradigm, holds that the permanent loss of the functions of the entire brain marks the end of the integrative unity of the body. In my response to Shewmon's article, I first offer a brief summary of the standard paradigm and cite recent work by advocates of whole brain criteria who tenaciously cling to the standard paradigm despite increasing evidence showing that it has significant weaknesses. Second, I address Shewmon's case against the standard paradigm, arguing that he is successful in showing that whole brain dead patients have integrated organic unity. Finally, I discuss some minor problems with Shewmon's article, along with suggestions for further elaboration.
Marc A. Hight has given us a well-researched, well-written, analytically rigorous and thought-provoking book about the development of idea ontology in the seventeenth and early eighteenth centuries. The book covers a great deal of material, some in significant depth, some not. The figures discussed include Descartes, Malebranche, Arnauld, Locke, Leibniz, Berkeley, and Hume. Some might think it a tall order for anyone to grapple with the central works of these figures on a subject as fundamental as the nature of ideas. And while reading the book, I must admit to having had this thought a few times. Seventeen pages on Descartes' theory of ideas, covering the development of his ontology of ideas, the distinction between formal reality and objective reality, the nature of mental representation, the contagion theory of causation, the doctrine of innate ideas as ungrounded dispositions, and the interactionism/occasionalism controversy? Wow. And yet Hight has done his homework. He knows the figures and the relevant interpretive controversies well, he focuses on many of the passages that are relevant to the book's central thesis, and in the end offers us a compelling narrative as an alternative to what he identifies as "the traditional view of what transpired in the early modern period" (2).
Alan Gewirth's Reason and Morality, in which he set forth the Principle of Generic Consistency, is a major work of modern ethical theory that, though much debated and highly respected, has yet to gain full acceptance. Deryck Beyleveld contends that this resistance stems from misunderstanding of the method and logical operations of Gewirth's central argument. In this book Beyleveld seeks to remedy this deficiency. His rigorous reconstruction of Gewirth's argument gives its various parts their most compelling formulation and clarifies its essential logical structure. Beyleveld then classifies all the criticisms that Gewirth's argument has received and measures them against his reconstruction of the argument. The overall result is an immensely rich picture of the argument, in which all of its complex issues and key moves are clearly displayed and its validity can finally be discerned. The comprehensiveness of Beyleveld's treatment provides ready access to the entire debate surrounding the foundational argument of Reason and Morality. It will be required reading for all who are interested in Gewirth's theory and deontological ethics and will be of central importance to moral and legal theorists.
D. Alan Shewmon has advanced a well-documented challenge to the widely accepted total brain death criterion for death of the human being. We show that Shewmon's argument against this criterion is unsound, though he does refute the standard argument for that criterion. We advance a distinct argument for the total brain death criterion and answer likely objections. Since human beings are rational animals – sentient organisms of a specific type – the loss of the radical capacity for sentience (the capacity to sense or to develop the capacity to sense) involves a substantial change, the passing away of the human organism. In human beings total brain death involves the complete loss of the radical capacity for sentience, and so in human beings total brain death is death.
As is well known, Alan Turing drew a line, embodied in the "Turing test," between intellectual and physical abilities, and hence between cognitive and natural sciences. Less familiarly, he proposed that one way to produce a "passer" would be to educate a "child machine," equating the experimenter's improvements in the initial structure of the child machine with genetic mutations, while supposing that the experimenter might achieve improvements more expeditiously than natural selection. On the other hand, in his foundational "The chemical basis of morphogenesis," Turing insisted that biological explanation confine itself to purely physical and chemical means, eschewing vitalist and teleological talk entirely and hewing to D'Arcy Thompson's line that evolutionary 'explanations' are historical and narrative in character, employing the same intentional and teleological vocabulary we use in doing human history, and hence, while perhaps on occasion of heuristic value, are not part of biology as a natural science. To apply Turing's program to recent issues, the attempt to give foundations to the social and cognitive sciences in the "real science" of evolutionary biology (as opposed to Turing's biology) is neither to give foundations, nor to achieve the unification of the social/cognitive sciences and the natural sciences.
I live just off of Bell Road outside of Newburgh, Indiana, a small town of 3,000 people. A mile down the street Bell Road intersects with Telephone Road not as a modern reminder of a technology belonging to bygone days, but as testimony that this technology, now more than a century and a quarter old, is still with us. In an age that prides itself on its digital devices and in which the computer now equals the telephone as a medium of communication, it is easy to forget the debt we owe to an era that industrialized the flow of information, that the light bulb, to pick a singular example, which is useful for upgrading visual information we might otherwise overlook, nonetheless remains the most prevalent of all modern day information technologies. Edison's light bulb, of course, belongs to a different order of informational devices than the computer, but not so the telephone, not entirely anyway. Alan Turing, best known for his work on the Theory of Computation (1937), the Turing Machine (also 1937) and the Turing Test (1950), is often credited with being the father of computer science and the father of artificial intelligence. Less well-known to the casual reader but equally important is his work in computer engineering. The following lecture on the Automatic Computing Engine, or ACE, shows Turing in this different light, as a mechanist concerned with getting the greatest computational power from minimal hardware resources. Yet Turing's work on mechanisms is often eclipsed by his thoughts on computability and his other theoretical interests. This is unfortunate for several reasons, one being that it obscures our picture of the historical trajectory of information technology, a second that it emphasizes a false dichotomy between "hardware" and "software" to which Turing himself did not subscribe but which has, nonetheless, confused researchers who study the nature of mind and intelligence for generations.
A veil of mystery lies over the death of the English polymath Alan Turing. It may well be that no other modern thinker's death is surrounded by as many legends and speculations. The undisputed facts are, in brief, these: a cleaner found Turing dead at his home on 8 June 1954. He was found to have died the previous evening of cyanide poisoning, and a half-eaten apple lay beside him. He was 41 years old at the time of his death. The rest is more or less a matter of conjecture.
The two books reviewed here are different efforts to embrace the vast subject called "social thought." The second edition of The Blackwell Dictionary of Modern Social Thought, edited by William Outhwaite with Alain Touraine, contains numerous updates; yet it also has some disadvantages compared to the first edition. Social Thought: From the Enlightenment to the Present, edited by Alan Sica, is a bold but controversial attempt at gathering in one anthology as many social thinkers as possible. Key Words: "social" • social thought/theory • William Outhwaite • Alan Sica • explanation.
For much of the second half of the 20th Century, the primary role logical empiricism played was that of the argumentative foil. The 'received view' on a given topic (especially in philosophy of science, logic, or language) was frequently identified with some supposedly dogmatic tenet of logical empiricism. However, during the last twenty-five years, scholars have paid serious, sustained attention to what the logical positivists, individually and collectively, actually said. Early scholarship on logical empiricism had to engage in heavy-duty PR work: why should anyone study the now-discarded mixture of blunders and implausibilities collected under the label 'logical empiricism'? However, thanks to the efforts of the pioneers, people studying the logical empiricists today need not articulate an extended apologia for their chosen subject of study -- rather, they can simply get on with their work. Many of the best fruits of these recent labors are on display in The Cambridge Companion to Logical Empiricism (CCLE), edited by Alan Richardson and Thomas Uebel.
After reviewing the status of the concept of the phenomenon in Husserl's phenomenology and the aim of successive attempts to reform, de-formalize, and widen it, we show the difficulties of a method that, following the example of Jean-Luc Marion's phenomenology, intends to connect the phenomenon directly to the revelation of an exteriority. We argue that, on the contrary, Marc Richir's phenomenology, which strives to grasp the phenomenon as nothing-but-phenomenon, is more likely to capture the "meaning" of the phenomenological, and hence to help us orient in the field of problems that phenomenology encounters without always knowing how to tackle them. Yet, this extension of the phenomenon's domain does not thereby encompass everything: there may well be certain issues that require a phenomenology without phenomenon; but the meaning of this cannot be determined before the complete reenvisioning of transcendental phenomenology.
In his short life, Alan Turing (1912-1954) made foundational contributions to philosophy, mathematics, biology, artificial intelligence, and computer science. He, as much as anyone, invented the digital electronic computer. From September 1939, much of his work on computation was war-driven and brutally practical. He developed high speed computing devices needed to decipher German Enigma Machine messages to and from U-boats, countering the most serious threat by far to Britain's survival during World War Two. Yet few people have an image of him.
Explaining his now famous parody in Social Text's "Science Wars" issue, Alan Sokal writes in Dissent ("Afterword", Fall 1996): But why did I do it? I confess that I'm an unabashed Old Leftist who never quite understood how deconstruction was supposed to help the working class. And I'm a stodgy old scientist who believes, naively, that there exists an external world, that there exist objective truths about that world, and that my job is to discover some of them. There is much to note in this "confession." Why choose a hoax on Social Text to make these points? Did Sokal believe its editors were unabashed deconstructionists who doubted the existence of an external world or that they were anti-science? If so, he has either misread the burden of its seventeen-year history or was capricious in his choice. If not, then he has perpetuated the saddest hoax of all: on himself. For the fact is that Social Text, of which I am a founder and in whose editorial collective I served until this year, has never been in the deconstructionist camp; nor do its editors or the preponderance of its contributors doubt the existence of a material world. What is at issue is whether our knowledge of it can possibly be free of social and cultural presuppositions.
In his short life, Alan Turing (1912-1954) made foundational contributions to philosophy, mathematics, biology, artificial intelligence, and computer science. He, as much as anyone, invented and showed how to program the digital electronic computer. From September 1939, his work on computation was war-driven and brutally practical. He developed high speed computing devices needed to decipher German Enigma Machine messages to and from U-boats, countering the most serious threat by far to Britain's survival during World War Two.
The origin of my article lies in the appearance of Copeland and Proudfoot's feature article in Scientific American, April 1999. This preposterous paper, as described on another page, suggested that Turing was the prophet of 'hypercomputation'. In their references, the authors listed Copeland's entry on 'The Church-Turing thesis' in the Stanford Encyclopedia. In the summer of 1999, I circulated an open letter criticising the Scientific American article. I included criticism of this Encyclopedia entry. This was forwarded (by Prof. Sol Feferman) to Prof. Ed Zalta, editor of the Encyclopedia, and after some discussion he invited me to submit an entry on 'Alan Turing'.
Alan Musgrave has been one of the most important philosophers of science in the last quarter of the 20th century. He has exemplified an exceptional combination of clear-headed and profound philosophical thinking. Two commitments seem to be the pillars of his thought: an uncompromising commitment to scientific realism and an equally uncompromising commitment to deductivism. The essays reprinted in this volume (which span a period of 25 years, from 1974 to 1999) testify to these two commitments. (There are two omissions from this collection: "Realism, Truth and Objectivity" in Realism and Anti-realism in the Philosophy of Science (1996, Kluwer) and "How to Do without Inductive Logic" (Science & Education vol. 8, 1999). I will make some references to these papers in what follows.) In the present review, instead of giving an orderly summary of the 16 papers of Essays, I discuss Musgrave's two major commitments and raise some worries about their combination.
In his short life, Alan Turing (1912-1954) made foundational contributions to philosophy, mathematics, biology, artificial intelligence, and computer science. He, as much as anyone, invented and showed how to program the digital electronic computer. From September 1939, his work on computation was war-driven and brutally practical. He developed high speed computing devices needed to decipher German Enigma Machine messages to and from U-boats, countering the most serious threat by far to Britain's survival during World War Two.
The mathematical genius Alan Turing (1912-1954) was one of the greatest scientists and thinkers of the 20th century. Now well known for his crucial wartime role in breaking the ENIGMA code, he was the first to conceive of the fundamental principle of the modern computer: the idea of controlling a computing machine's operations by means of a program of coded instructions, stored in the machine's 'memory'. In 1945 Turing drew up his revolutionary design for an electronic computing machine, his Automatic Computing Engine ('ACE'). A pilot model of the ACE ran its first program in 1950 and the production version, the 'DEUCE', went on to become a cornerstone of the fledgling British computer industry. The first 'personal' computer was based on Turing's ACE.

Alan Turing's Automatic Computing Engine describes Turing's struggle to build the modern computer. The first detailed history of Turing's contributions to computer science, this text is essential reading for anyone interested in the history of the computer and the history of mathematics. It contains first-hand accounts by Turing and by the pioneers of computing who worked with him. As well as relating the story of the invention of the computer, the book clearly describes the hardware and software of the ACE, including the very first computer programs. The book is intended to be accessible to everyone with an interest in computing, and contains numerous diagrams and illustrations as well as original photographs.

The book contains chapters describing Turing's path-breaking research in the fields of Artificial Intelligence (AI) and Artificial Life (A-Life). The book has an extensive system of hyperlinks to The Turing Archive for the History of Computing, an on-line library of digital facsimiles of typewritten documents by Turing and the other scientists who pioneered the electronic computer.
Health research initiatives worldwide are growing in scope and complexity, particularly as they move into the developing world. Expanding health research activity in low- and middle-income countries has resulted in a commensurate rise in the need for sound ethical review structures and functions in the form of Research Ethics Committees (RECs). Yet these seem to be lagging behind as a result of the enormous challenges facing these countries, including poor resource availability and lack of capacity. There is thus an urgent need for ongoing capacity and resource development in these regions in general, and in Africa in particular. Similarly, there is a need for research and initiatives that can identify existing capacity and funding and indicate the areas where this needs to be developed. This discussion paper argues that the Mapping African Research Ethics Capacity (MARC) project is a timely initiative aimed at identifying existing capacity. MARC provides a platform and tool on the Council on Health Research for Development's (COHRED) Health Research website (HRWeb), which can be used by RECs and key stakeholders in health research in Africa to identify capacity, constraints and development needs. MARC intends to provide the first comprehensive interactive database of RECs in Africa, which will allow for the identification of key relationships and analyses of capacity. The potential of MARC lies in the mapping of current ethical review activity onto capacity needs. This paper serves as a starting point by providing a descriptive illustration of the current state of RECs in Africa.
The December 2008 White Paper (WP) on “Brain Death” published by the President’s Council on Bioethics (PCBE) reaffirmed its support for the traditional neurological criteria for human death. It spends considerable time explaining and critiquing what it takes to be the most challenging recent argument opposing the neurological criteria, formulated by D. Alan Shewmon, a leading critic of the “whole brain death” standard. The purpose of this essay is to evaluate and critique the PCBE’s argument. The essay begins with a brief background on the history of the neurological criteria in the United States and on the preparation of the 2008 WP. After introducing the WP’s contents, the essay sets forth Shewmon’s challenge to the traditional neurological criteria and the PCBE’s reply to Shewmon. The essay concludes by critiquing the WP’s novel justification for reaffirming the traditional conclusion, a justification the essay finds wanting.
This is the second of two volumes of essays in commemoration of Alan Turing; it celebrates his intellectual legacy within the philosophy of mind and cognitive science. A distinguished international cast of contributors focus on the relationship between a scientific, computational image of the mind and a common-sense picture of the mind as an inner arena populated by concepts, beliefs, intentions, and qualia. Topics covered include the causal potency of folk-psychological states, the connectionist reconception of learning and concept formation, the understanding of the notion of computation itself, and the relation between philosophical and psychological theories of concepts.

Also available in paperback is the companion volume, Machines and Thought, edited by Peter Millican and Andy Clark, which focuses on Turing's main innovations in artificial intelligence.
Among members of the legal profession and judiciary throughout the world, there is a genuine concern with establishing and maintaining high ethical standards. It is not difficult to understand why this should be so. Nor is it difficult to see that professional standards are not completely divorced from ordinary morality. Indeed, legal ethics and professional responsibility are more than a set of rules of good conduct; they are also a commitment to honesty, integrity, and service in the practice of law. In order to ensure that the standards established are the right ones, it is necessary first of all to examine important philosophical and policy issues, such as the need to reconsider the boundaries between, on the one hand, a lawyer's obligation to a client and, on the other, the public interest. It is also to be appreciated that conflicts of interest are pervasive and that all too often they are so common that they are not recognized as such. Yet rarely is public policy clear-cut.

The underlying themes of this book are:
- that the move to more definite rules is not only inevitable but also desirable;
- that existing codes of professional practice cannot simply be treated as a system of specific rules;
- that the current set of ethical rules is contestable and requires further refinement, perhaps even radical surgery;
- and that legal ethics must be conceived in the more general area of professional responsibility.

The wider ethical issues of the operation of the legal profession as a whole are now firmly on the agenda. Both law schools and law professionals have a role to play in developing acceptable standards in this area and it is therefore appropriate that the essays in this volume are written by a distinguished group of law teachers and practitioners together with senior members of the judiciary.

The book opens with an overview chapter, followed by three chapters analysing the ethical rules pertaining to the judiciary, the Bar, and solicitors, written by, respectively, the Master of the Rolls, Anthony Thornton, and Alison Crawley and Christopher Bramall. The following three chapters look at the specific issues of confidentiality (Michael Brindle and Guy Dehn) and the particular ethical problems in the family and criminal law jurisdictions (Sir Alan Ward and Professor Andrew Ashworth respectively). Chapter 8, by Sir Alan Paterson, discusses the teaching of legal ethics, whilst Chapters 9 and 10, by Marc Galanter, Thomas Palay, and Cyril Glasser put the subject in its wider social and professional context. The book finishes with a chapter which examines what lawyers may learn from looking at the study of medical ethics.
In this paper we present the syntax and semantics of a temporal action language named Alan, which was designed to model interactive multimedia presentations where the Markov property does not always hold. In general, Alan allows the specification of systems where the future state of the world depends not only on the current state, but also on the past states of the world. To the best of our knowledge, Alan is the first action language which incorporates causality with temporal formulas. In the process of defining the effect of actions we define the closure with respect to a path rather than to a state, and show that the non-Markovian model is an extension of the traditional Markovian model. Finally, we establish a relationship between theories of Alan and logic programs.
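To make the Markovian/non-Markovian contrast in this abstract concrete, here is a minimal Python sketch under stated assumptions; it is not Alan's actual syntax or semantics, and the fluents, actions, and effect rules (markov_step, non_markov_step, "alarm", "replay") are hypothetical names invented for illustration. The point is only that a path-dependent effect function consults the whole history of states, whereas a Markovian one consults the current state alone.

```python
from typing import List

State = frozenset   # a state is a set of fluents, e.g. frozenset({"door_open"})
Path = List[State]  # the history of states, oldest first

def markov_step(state: State, action: str) -> State:
    """Markovian effect: the next state depends only on the current state."""
    if action == "toggle":
        return state ^ frozenset({"door_open"})
    return state

def non_markov_step(path: Path, action: str) -> State:
    """Path-dependent effect: what happens now is constrained by the whole past."""
    current = path[-1]
    if action == "replay" and any("alarm" in s for s in path):
        # a past alarm, anywhere in the history, still matters now
        return current | frozenset({"locked"})
    return markov_step(current, action)

if __name__ == "__main__":
    history: Path = [frozenset(), frozenset({"alarm"}), frozenset()]
    print(non_markov_step(history, "replay"))   # frozenset({'locked'})
    print(markov_step(history[-1], "toggle"))   # frozenset({'door_open'})
```

The Markovian function is recovered as the special case in which only the last element of the path is consulted, which mirrors the abstract's claim that the non-Markovian model extends the traditional one.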
This is the first of two volumes of essays in commemoration of Alan Turing, whose pioneering work in the theory of artificial intelligence and computer science continues to be widely discussed today. A group of prominent academics from a wide range of disciplines focus on three questions famously raised by Turing: What, if any, are the limits on machine 'thinking'? Could a machine be genuinely intelligent? Might we ourselves be biological machines, whose thought consists essentially in nothing more than the interaction of neurons according to strictly determined rules? The discussion of these fascinating issues is accessible to non-specialists and stimulating for all readers.

Also available in paperback is the companion volume: Connectionism, Concepts, and Folk Psychology, edited by Andy Clark and Peter Millican. While Volume 1 concentrates on Turing's main innovations in artificial intelligence, Volume 2 looks more broadly at his intellectual legacy in philosophy and cognitive science.
Alan Gewirth's Reason and Morality directed philosophical attention to the possibility of presenting a rational and rigorous demonstration of fundamental moral principles. Now, these previously unpublished essays from some of the most distinguished philosophers of our generation subject Gewirth's program to thorough evaluation and assessment. In a tour de force of philosophical analysis, Professor Gewirth provides detailed replies to all of his critics--a major, genuinely clarifying essay of intrinsic philosophical interest.
Can torture be morally justified? I shall criticise arguments that have been adduced against torture and demonstrate that torture can be justified more easily than most philosophers dealing with the question are prepared to admit. It can be justified not only in ticking nuclear bomb cases but also in less spectacular ticking bomb cases and even in the so-called Dirty Harry cases. There is no morally relevant difference between self-defensive killing of a culpable aggressor and torturing someone who is culpable of a deadly threat that can be averted only by torturing him. Nevertheless, I shall argue that torture should not be institutionalised, for example by torture warrants.
I imagine that people will complain that the account of normative concepts defended in Gibbard's new book makes the metaethical waters even muddier because it blurs the line between cognitivism and noncognitivism and between realism and antirealism. However, these labels are philosophic tools, and in the wake of Gibbard's new book, one might rightly conclude that there are new and better philosophical tools emerging on the metaethical scene. The uptake of views about practical reasoning, as exhibited by planning, into debates about the meaning of normative claims is a fruitful line of research. And, as Wise Choices, Apt Feelings made an initial bold step into novel and fruitful lines of research into the connection between psychology and morality, Thinking How to Live has also made a bold step into novel and fruitful lines of research into the connection between practical reasoning and normative semantics.
One type of deflationism about metaphysical modality suggests that it can be analysed strictly in terms of linguistic or conceptual content and that there is nothing particularly metaphysical about modality. Scott Soames is explicitly opposed to this trend. However, a detailed study of Soames's own account of modality reveals that it has striking similarities with the deflationary account. In this paper I will compare Soames's account of a posteriori necessities concerning natural kinds with the deflationary one, specifically Alan Sidelle's account, and suggest that Soames's account is vulnerable to the deflationist's critique. Furthermore, I conjecture that both the deflationary account and Soames's account fail to fully explicate the metaphysical content of a posteriori necessities. Although I will focus on Soames, my argument may have more general implications towards the prospects of providing a meaning-based account of metaphysical modality.
On the 27th of October, 1949, the Department of Philosophy at the University of Manchester organized a symposium "Mind and Machine", as Michael Polanyi noted in his Personal Knowledge (1974, p. 261). This event is known, especially among scholars of Alan Turing, but it is scarcely documented. Wolfe Mays (2000) reported about the debate, which he personally had attended, and paraphrased a mimeographed document that is preserved at the Manchester University archive. He forwarded a copy to Andrew Hodges and B. Jack Copeland, who then published it on their respective websites. The basis of the interpretation here is the copy preserved in the Regenstein Library of the University of Chicago, Special Collections, Polanyi Collection (abbreviated RPC, box 22, folder 19). The same collection holds the mimeographed statement that Polanyi prepared for this symposium: "Can the mind be represented by a machine?" This text has not been studied by Polanyi scholars.
My theory of biocentric consequentialism is first shown not to be significantly inegalitarian, despite not advocating treating all creatures equally. I then respond to Carter's objections concerning population, species extinctions, the supposed minimax implication, endangered interests, autonomy and thought-experiments. Biocentric consequentialism is capable of supporting a sustainable human population at a level compatible with preserving most non-human species, as opposed to catastrophic population increases or catastrophic decimation. Nor is it undermined by the mere conceivable possibility of counter-intuitive implications. While Carter shows that value-pluralism need not be riddled with contradictions, his version still introduces some, and faces further problems. Thus consequentialist theories may be needed to sift our values, at least if our values are commensurable. Carter's apparent suggestion that monistic theories such as biocentric consequentialism can never be harnessed to rich theories of value and must each myopically give undue prominence to a single value is questioned.
Theories of personal identity purport to specify truth conditions for sentences of the form 'x-at-ti is the same person as y-at-tj'. Most philosophers nowadays agree that such truth conditions are to be stated in terms of psychological continuity. However, opinions vary as to how the notion of psychological continuity is to be understood. In a recent contribution to this journal, Slors offers an account in which psychological continuity is spelled out in terms of narrative connectedness between mental states. The present paper argues that Slors' theory either is no theory of personal identity at all or is too weak. Towards the end of the paper, it is indicated how the problem uncovered for Slors' theory may be avoided.
We investigate Turing's contributions to computability theory for real numbers and real functions presented in [22, 24, 26]. In particular, it is shown how two fundamental approaches to computable analysis, the so-called 'Type-2 Theory of Effectivity' (TTE) and the 'real-RAM machine' model, have their foundations in Turing's work, in spite of the two incompatible notions of computability they involve. It is also shown, by contrast, how the modern conceptual tools provided by these two paradigms allow a systematic interpretation of Turing's pioneering work in the subject.
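As a minimal illustration of the Type-2 picture as it is commonly presented (this sketch is not drawn from the paper under review), a real number counts as computable when a program can deliver, for any requested precision n, a rational approximation within 2^-n. The Python example below does this for sqrt(2) by exact bisection with rationals; the function name and the bisection scheme are my own choices for the example.

```python
from fractions import Fraction

def sqrt2_approx(n: int) -> Fraction:
    """Return a rational q with |q - sqrt(2)| <= 2**-n, by bisection.

    This is the Type-2 idea of a computable real: the number is given by a
    program producing arbitrarily good finite approximations on demand,
    not by a finite expansion stored all at once.
    """
    lo, hi = Fraction(1), Fraction(2)      # sqrt(2) lies in [1, 2]
    eps = Fraction(1, 2 ** n)
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid                       # invariant: lo**2 <= 2 <= hi**2
        else:
            hi = mid
    return lo

if __name__ == "__main__":
    q = sqrt2_approx(20)
    print(q, float(q))                     # accurate to within 2**-20
```

A real-RAM model, by contrast, idealizes exact arithmetic on real numbers in unit time, which is why the two notions of computability mentioned in the abstract come apart.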
It is not widely realised that Turing was probably the first person to consider building computing machines out of simple, neuron-like elements connected together into networks in a largely random manner. Turing called his networks unorganised machines. By the application of what he described as appropriate interference, mimicking education, an unorganised machine can be trained to perform any task that a Turing machine can carry out, provided the number of neurons is sufficient. Turing proposed simulating both the behaviour of the network and the training process by means of a computer program. We outline Turing's connectionist project of 1948.
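As a rough illustration of the kind of simulation the abstract describes, the following Python sketch wires a small number of two-input NAND-like units together at random and steps them synchronously, in the spirit of Turing's A-type unorganised machines; the unit count, wiring scheme, and update rule here are illustrative choices, not Turing's own 1948 specification, and no training ("appropriate interference") is modelled.

```python
import random

def make_unorganised_machine(n_units: int, seed: int = 0):
    """Randomly wire n_units two-input NAND units into a network."""
    rng = random.Random(seed)
    # each unit reads the outputs of two randomly chosen units
    inputs = [(rng.randrange(n_units), rng.randrange(n_units)) for _ in range(n_units)]
    state = [rng.randint(0, 1) for _ in range(n_units)]
    return inputs, state

def step(inputs, state):
    """Synchronous update: each unit outputs the NAND of its two inputs."""
    return [1 - (state[a] & state[b]) for (a, b) in inputs]

if __name__ == "__main__":
    inputs, state = make_unorganised_machine(8, seed=42)
    for t in range(5):
        print(t, state)
        state = step(inputs, state)
```

Training, on Turing's account, would amount to selectively fixing or modifying connections from outside the network until the desired behaviour appears, which is what a fuller simulation of the 1948 project would add on top of this loop.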