This is the first of two volumes of essays on the intellectual legacy of Alan Turing, whose pioneering work in artificial intelligence and computer science made him one of the seminal thinkers of the century. A distinguished international cast of contributors focuses on the three famous ideas associated with his name: the Turing test, the Turing machine, and the Church-Turing thesis. 'A fascinating series of essays on computation by contributors in many fields' (Choice).
In this interview, the prestigious anthropologist, historian and TV announcer Alan Macfarlane comments on some of the issues that have been addressed in his writings. His main theoretical concern has been to study the peculiar conditions that gave rise to the mode...
This paper concerns Alan Turing’s ideas about machines, mathematical methods of proof, and intelligence. By the late 1930s, Kurt Gödel and other logicians, including Turing himself, had shown that no finite set of rules could be used to generate all true mathematical statements. Yet according to Turing, there was no upper bound to the number of mathematical truths provable by intelligent human beings, for they could invent new rules and methods of proof. So, the output of a human mathematician, for Turing, was not a computable sequence (i.e., one that could be generated by a Turing machine). Since computers only contained a finite number of instructions (or programs), one might argue, they could not reproduce human intelligence. Turing called this the “mathematical objection” to his view that machines can think. Logico-mathematical reasons, stemming from his own work, helped to convince Turing that it should be possible to reproduce human intelligence, and eventually compete with it, by developing the appropriate kind of digital computer. He felt it should be possible to program a computer so that it could learn or discover new rules, overcoming the limitations imposed by the incompleteness and undecidability results in the same way that human mathematicians presumably do.
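The notion of a computable sequence at issue here can be made concrete. As an illustrative sketch (not from the paper itself), the following Python code simulates a machine in the style of the first example in Turing's 1936 paper, which generates the alternating sequence 0, 1, 0, 1, …; a sequence is computable precisely when some such finite table of rules generates it.

```python
def run_turing_machine(table, start_state, steps):
    """Simulate a one-tape Turing machine for a fixed number of steps.

    `table` maps (state, scanned_symbol) -> (write, move, next_state),
    where move is +1 (right) or -1 (left).
    """
    tape = {}                  # sparse tape: position -> symbol
    head, state = 0, start_state
    for _ in range(steps):
        scanned = tape.get(head, ' ')          # unmarked squares are blank
        write, move, state = table[(state, scanned)]
        tape[head] = write
        head += move
    return tape

# Four-state machine: print 0, skip a square, print 1, skip, repeat.
table = {
    ('b', ' '): ('0', +1, 'c'),
    ('c', ' '): (' ', +1, 'e'),
    ('e', ' '): ('1', +1, 'f'),
    ('f', ' '): (' ', +1, 'b'),
}

tape = run_turing_machine(table, 'b', steps=8)
sequence = ''.join(tape.get(i, '') for i in range(8) if tape.get(i, ' ') != ' ')
print(sequence)  # prints "0101"
```

The "mathematical objection" summarized above turns on the fact that any such rule table is finite and fixed, whereas (on Turing's view) human mathematicians can always adopt new rules.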
In a recent article in this journal, Alan Thomas presents a novel defence of what I call ‘Rawlsian Institutionalism about Justice’ against G. A. Cohen’s well-known critique. In this response I aim to defend Cohen’s rejection of Institutionalism against Thomas’s arguments. In part this defence requires clarifying precisely what is at issue between Institutionalists and their opponents. My primary focus, however, is on Thomas’s critical discussion of Cohen’s endorsement of an ethical prerogative, as well as his appeal to the institutional framework of a ‘property-owning democracy’ in his elaboration of the precise institutional requirements of Rawlsian Institutionalist justice, and his related claim that Cohen’s rejection of Institutionalism involves an objectionable ‘double counting’ of the demands of justice. I argue that once we are clear about both the kind of justification that can be given for a prerogative within a plausible ethical theory, and about the key points of departure between Institutionalist views and their rivals, Cohen’s rejection of Institutionalism appears well-motivated, and Thomas’s claim that his view is guilty of double counting the demands of justice can be seen to be mistaken.
In his article, ‘Gratuitous evil and divine providence’, Alan Rhoda claims to have produced an uncontroversial theological premise for the evidential argument from evil. I argue that his premise is by no means uncontroversial among theists, and I doubt that any premise can be found that is both uncontroversial and useful for the argument from evil.
Alan Shewmon's article, 'The brain and somatic integration: Insights into the standard biological rationale for equating brain death with death' (2001), strikes at the heart of the standard justification for whole brain death criteria. The standard justification, which I call the standard paradigm, holds that the permanent loss of the functions of the entire brain marks the end of the integrative unity of the body. In my response to Shewmon's article, I first offer a brief summary of the standard paradigm and cite recent work by advocates of whole brain criteria who tenaciously cling to the standard paradigm despite increasing evidence showing that it has significant weaknesses. Second, I address Shewmon's case against the standard paradigm, arguing that he is successful in showing that whole brain dead patients have integrated organic unity. Finally, I discuss some minor problems with Shewmon's article, along with suggestions for further elaboration.
In this book Alan Haworth tends to sneer at libertarians. However, there are, I believe, a few sound criticisms. I have always held similar opinions of Murray Rothbard's and Friedrich Hayek's definitions of liberty and coercion, Robert Nozick's account of natural rights, and Hayek's spontaneous-order arguments. I urge believers of these positions to read Haworth. But I don't personally know many libertarians who believe them (or who regard Hayek as a libertarian).
Alan Carter's recent review in Mind of my Ethics of the Global Environment combines praise of biocentric consequentialism with criticisms that it could advocate both minimal satisfaction of human needs and the extinction of ‘inessential species’ for the sake of generating extra people; Carter also maintains that as a monistic theory it is predictably inadequate to cover the full range of ethical issues, since only a pluralistic theory has this capacity. In this reply, I explain how the counter-intuitive implications of biocentric consequentialism suggested by Carter are not implications, and argue that since pluralistic theories either generate contradictions or collapse into monistic theories, the superiority of pluralistic theories is far from predictable. Thus Carter's criticisms fail to undermine biocentric consequentialism as a normative theory applicable to the generality of ethical issues.
Alan Weir’s new book is, like Darwin’s Origin of Species, ‘one long argument’. The author has devised a new kind of have-it-both-ways philosophy of mathematics, supposed to allow him to say out of one side of his mouth that the integer 1,000,000 exists and even that the cardinal ℵω exists, while saying out of the other side of his mouth that no numbers exist at all, and the whole book is devoted to an exposition and defense of this new view. The view is presented in the book in a way that can make it difficult for the reader to trace the main line of argument: with a great deal of apparatus, and with a great many digressions into subordinate issues. In what follows I will try to stick to what I take to be the essentials, even at the risk of oversimplifying some central but complicated issues, and at the cost of neglecting some interesting but peripheral ones. In chapter 1, the author introduces a distinction between what he calls ‘two aspects of meaning’ and dubs informational content and metaphysical content. Informational content is the aspect of meaning of primary interest to linguists, and the one of which speakers themselves are generally aware, at least upon reflection. Metaphysical content is supposed to be another aspect of meaning primarily of interest to philosophers. The basic idea is that if there are standards of correctness for assertions of a certain kind, then such an assertion may be called ‘true’ when those standards are met, even though the kind of correctness involved is not correctness in representing how the world is. What the world must be like in order for the utterance to be true is the metaphysical content of the assertion, but it need not be part of its …
The origin of my article lies in the appearance of Copeland and Proudfoot's feature article in Scientific American, April 1999. This preposterous paper, as described on another page, suggested that Turing was the prophet of 'hypercomputation'. In their references, the authors listed Copeland's entry on 'The Church-Turing thesis' in the Stanford Encyclopedia. In the summer of 1999, I circulated an open letter criticising the Scientific American article. I included criticism of this Encyclopedia entry. This was forwarded to Prof. Ed Zalta, editor of the Encyclopedia, and after some discussion he invited me to submit an entry on 'Alan Turing'.
Alan Musgrave has been one of the most important philosophers of science in the last quarter of the 20th century. He has exemplified an exceptional combination of clearheaded and profound philosophical thinking. Two commitments seem to be the pillars of his thought: an uncompromising commitment to scientific realism and an equally uncompromising commitment to deductivism. The essays reprinted in this volume (which span a period of 25 years, from 1974 to 1999) testify to these two commitments. (There are two omissions from this collection: “Realism, Truth and Objectivity” in Realism and Anti-realism in the Philosophy of Science (1996, Kluwer) and “How to Do without Inductive Logic” (Science & Education, vol. 8, 1999). I will make some references to these papers in what follows.) In the present review, instead of giving an orderly summary of the 16 papers of Essays, I discuss Musgrave’s two major commitments and raise some worries about their combination.
Jonathan Quong: Alan Patten presents his account of minority rights as broadly continuous with Ronald Dworkin’s theory of equality of resources. This paper challenges this claim. I argue that, contra Patten, Dworkin’s theory does not provide a basis to offer accommodations or minority rights, as a matter of justice, to some citizens who find themselves at a relative disadvantage in pursuing their plans of life after voluntarily changing their cultural or religious commitments.
I live just off of Bell Road outside of Newburgh, Indiana, a small town of 3,000 people. A mile down the street Bell Road intersects with Telephone Road not as a modern reminder of a technology belonging to bygone days, but as testimony that this technology, now more than a century and a quarter old, is still with us. In an age that prides itself on its digital devices and in which the computer now equals the telephone as a medium of communication, it is easy to forget the debt we owe to an era that industrialized the flow of information, that the light bulb, to pick a singular example, which is useful for upgrading visual information we might otherwise overlook, nonetheless remains the most prevalent of all modern day information technologies. Edison’s light bulb, of course, belongs to a different order of informational devices than the computer, but not so the telephone, not entirely anyway. Alan Turing, best known for his work on the Theory of Computation (1937), the Turing Machine (also 1937) and the Turing Test (1950), is often credited with being the father of computer science and the father of artificial intelligence. Less well-known to the casual reader but equally important is his work in computer engineering. The following lecture on the Automatic Computing Engine, or ACE, shows Turing in this different light, as a mechanist concerned with getting the greatest computational power from minimal hardware resources. Yet Turing’s work on mechanisms is often eclipsed by his thoughts on computability and his other theoretical interests. This is unfortunate for several reasons, one being that it obscures our picture of the historical trajectory of information technology, a second that it emphasizes a false dichotomy between “hardware” and “software” to which Turing himself did not subscribe but which has, nonetheless, confused researchers who study the nature of mind and intelligence for generations.
This small book packs a considerable theoretical and practical punch. Alan Ware challenges much received wisdom about the dynamics of two party politics. In the process, he adds considerably to contemporary discussion of the intersection of structure and agency in the development and adaptation of political systems. Ware picks out two party systems for concentrated attention because of their relative tractability; in his words, these systems are ideal for analysing the capacity of parties to pursue their interests in the face both of other actors within the political system and of elements within the party itself.
Explaining his now famous parody in Social Text's "Science Wars" issue, Alan Sokal writes in Dissent: But why did I do it? I confess that I'm an unabashed Old Leftist who never quite understood how deconstruction was supposed to help the working class. And I'm a stodgy old scientist who believes, naively, that there exists an external world, that there exist objective truths about that world, and that my job is to discover some of them. There is much to note in this "confession." Why choose a hoax on Social Text to make these points? Did Sokal believe its editors were unabashed deconstructionists who doubted the existence of an external world or that they were anti-science? If so, he has either misread the burden of its seventeen-year history or was capricious in his choice. If not, then he has perpetuated the saddest hoax of all: on himself. For the fact is that Social Text, of which I am a founder and in whose editorial collective I served until this year, has never been in the deconstructionist camp; nor do its editors or the preponderance of its contributors doubt the existence of a material world. What is at issue is whether our knowledge of it can possibly be free of social and cultural presuppositions.
This article explores Alan Gewirth’s argument for a secular foundation for the idea of human rights as a possible response to Michael J. Perry’s claim “that the idea of human rights is…ineliminably religious.” I examine Gewirth’s reasoning for constructing a theory, namely that existing theories are fundamentally flawed and leave the idea of human rights without a logically consistent foundation, before considering in detail his claims for the Principle of Generic Consistency (PGC). Having looked at his critique of numerous other theories, as well as at his own argument about human action grounding basic rights to freedom and well-being, I then offer a critique of Gewirth’s PGC. Ultimately my conclusion is that Gewirth's theory relies too heavily on the notions, first that we have a meta-desire not to contradict ourselves and, second, that we are unable to find persuasive justifications for our behavior that might allow us to avoid self-contradiction. If one is not troubled by charges of self-contradiction or, as is more often the case, one does not recognize that one’s victim is as much a human being as oneself, Gewirth’s theory will not seem particularly persuasive.
Alan Gibson has invited me to discuss publicly some of the issues I raised in my referee's report for History of Political Thought on his article ‘Ancients, Moderns and Americans: The Republicanism-Liberalism Debate Revisited’. I gladly accepted, not least because a dialogue on views of the American Founding is fitting both for the subject and Gibson's instructive characterization of it. The Federalist opens by identifying candid debate as a fundamental achievement for young America, giving hope that reflection and choice, not just accident and force, can govern humans. Similarly, Gibson's argument that America's formative political thought comprises ‘multiple traditions’ — including Lockean liberalism, the republicanism of English opposition writings, and other influences — seems to define the Founding as not a monologue but a lively dialogue. I agree that complexity and not a single, pure strain of thought defines America's Founding, but this is not to say there are no fundamental principles for which the Founding stands. In the spirit of candid exchange, I will not discuss the many points in Dr Gibson's article with which I agree but those where one might interpret the American Founding differently or find an important dimension overlooked.
Preface. The problems of causal dependencies, their modeling and their discovery, after a long absence from the philosophy and methodology of science, are today attracting considerable interest. This is connected above all with the dynamic development of computational techniques, especially since the 1990s. Bayesian networks, developed during this period, are regarded as the mathematical language of causality. They permit far-reaching automation of inference, which also encourages attempts to algorithmize the discovery of causes. For scientific research that allows randomized experiments, standard methods of establishing causal dependencies were developed at the beginning of the 20th century. The situation is entirely different for non-experimental research, where comparable solutions remain a matter for the future. The task of this book is to state the conditions that such solutions should satisfy, and to formulate a procedural criterion of causal dependence as a specific realization of those conditions. This carries weighty consequences for the philosophy and methodology of science, which are revealed in the outline of procedural methodology given in Part II. The literature lacks a reasonably comprehensive and systematic discussion of the most recent philosophical and methodological debates on causality, which may explain why at some points in this book I report in detail on sources that are difficult to access. I use the adjective 'procedural' in a narrower sense than Huw Price (in whose works the term 'criterial' would be more apt) to emphasize, in accordance with the Latin root procedo, that establishing a cause requires scientists to undertake specific interactions with the reality under study.
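The preface describes Bayesian networks as the mathematical language of causality, permitting far-reaching automation of inference. As a minimal illustration (not drawn from the book, and with made-up probabilities), the following Python sketch shows the kind of computation such networks mechanize, for the simplest possible network, Cause → Effect: given the prior and the conditional probabilities, Bayes' rule yields the posterior probability of the cause once the effect is observed.

```python
# Two-node Bayesian network: Cause -> Effect.
# All numbers below are hypothetical, for illustration only.
p_cause = 0.01                      # prior P(C)
p_effect_given_cause = 0.9          # P(E | C)
p_effect_given_no_cause = 0.05      # P(E | not C)

# Marginal probability of the effect (law of total probability).
p_effect = (p_effect_given_cause * p_cause
            + p_effect_given_no_cause * (1 - p_cause))

# Posterior P(C | E) by Bayes' rule -- the automated inference step.
p_cause_given_effect = p_effect_given_cause * p_cause / p_effect
print(round(p_cause_given_effect, 3))
```

In larger networks the same arithmetic is chained over many variables, which is what makes the algorithmic discovery of causes mentioned above a tractable research program.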
I presented the seeds of the conception developed in this book at the philosophical workshop "Philosophy and Probability" in 2002, organized by the Institute of Philosophy of the University of Konstanz. I am grateful to the participants of that workshop for their comments, above all to Luc Bovens, Brandon Fitelson, Alan Hájek, Stephan Hartmann, and Jon Williamson. At the international conference "Analytical Pragmatism", organized in Lublin in 2003 by the Faculty of Philosophy of the Catholic University of Lublin, I related my conception to the work of Nancy Cartwright. Huw Price's commentary on my paper, and the discussion I had with him, proved especially inspiring. I presented the conception of procedural methodology against the broader background of the contemporary empiricist current in the philosophy of science in 2004 at the "5th Quadrennial Fellows Conference", organized by the Institute of Philosophy of the Jagiellonian University and the Center for Philosophy of Science in Pittsburgh. Especially helpful in my further work were the comments of James Bogen, Janet Kourany, James Lennox, John Norton, Thomas Bonk, Jan Woleński, and John Worrall, for which I express my gratitude. The main body of the book was written during my stay at the Center for Philosophy of Science in Pittsburgh, which I held as a fellow of the Foundation for Polish Science in the academic year 2004-2005. During that time I took part in the scholarly life of the Center and in the research of the group at the Department of Philosophy of Carnegie Mellon University in Pittsburgh led by Clark Glymour. To him I extend my thanks for many helpful comments on my talks and texts, and for discussions above all with him and with his closest collaborators, Peter Spirtes and Richard Scheines, as well as with the other members of the group, doctoral students, and participants in the research seminar "Causality in the Social Sciences".
For many years of support, for the many-sided inspiration that accompanied the writing of this book, and for numerous helpful comments on its earlier versions, I thank above all Professor Andrzej Bronk and Professor Józef Herbut, co-director of the doctoral seminar in the Chair of Methodology of Sciences at the John Paul II Catholic University of Lublin, as well as the other participants in that seminar. I thank my wife, Dr Anna Kawalec, for the great effort she put into improving the book's editing, both linguistic and substantive. This book can be read in several ways. Readers interested primarily in conducting empirical research I would advise to begin with Chapter 2, continue with the remaining chapters of Part I, and then turn to the Appendices. Readers interested in problems of the philosophy and methodology of science I would advise to begin with Part II, supplemented by Chapter 2, and then read the Introduction and Conclusion. Readers less interested in theoretical questions I would encourage to begin with the fascinating story of John Snow's discovery of the causes of cholera, which I reconstruct in Chapter 1, and then to proceed to the Introduction and Conclusion, which present the solutions proposed here in a less specialized way. The text of the book has not been published before, with the exception of certain fragments of Chapters 8 and 9, which appeared in altered form in Roczniki Filozoficzne (Kawalec 2004). Lublin, February 2006.
Human rights are often taken for granted, but in fact the origin or foundation of human rights is not an easy question. Alan Dershowitz tries to tackle this thorny issue in his 2004 book Rights from Wrongs: A Secular Theory of the Origins of Rights.
He mainly considers four theories: 1) externalism, e.g., the divine source theory of human rights; 2) internalism, e.g., legal positivism; 3) rationalism, e.g., the claim that human rights are founded upon rational intuitions; 4) the experiential approach, i.e., that human rights are derived from our historical experience of serious wrongs or injustice. He severely criticizes the first three theories, and defends the fourth. In this article, I will point out that despite quite a few merits of Dershowitz's theory, his criteria for the validation of human rights are vague or even inconsistent. His general stance of constructivism towards moral and human rights claims, moreover, is unable to provide solid foundations for human rights. In fact Dershowitz is to some extent aware of these problems, and he feels that human rights do need an objective external source, if only it is possible. I think the theistic worldview is indeed able to provide this objective external source. Dershowitz has raised a lot of good questions for the divine source theory, but I argue that in the end they cannot exclude the possibility of a contemporary Christocentric theory of human rights.