Alan Turing’s pioneering work on computability, and his ideas on morphological computing, support Andrew Hodges’ view of Turing as a natural philosopher. Turing’s natural philosophy differs importantly from Galileo’s view that the book of nature is written in the language of mathematics (The Assayer, 1623). Computing is more than a language used to describe nature, since computation produces real-time physical behaviors. This article presents the framework of natural info-computationalism as a contemporary natural philosophy that builds on the legacy of Turing’s computationalism. Info-computational conceptualizations, models, and tools make possible, for the first time, the modeling of complex self-organizing adaptive systems, including basic characteristics and functions of living systems, intelligence, and cognition.
Alan Turing is known for his mathematical creativity and genius, for his role in wartime cryptography, and for his homosexuality, for which he was persecuted. Yet there is little work that brings these two parts of his life together. This paper deconstructs and moves beyond the extant stereotypes around perceived associations between gay men and creativity, to consider how Turing’s lived experience as a queer mathematician provides a rich seam of insight into the ways in which his life, relationships, and working environment shaped his work.
In this interview, the prestigious anthropologist, historian and T.V. announcer Alan Macfarlane comments on some of the issues that have been addressed in his writings. His main theoretical concern has been to study the peculiar conditions that gave rise to the mode..
This small book packs a considerable theoretical and practical punch. Alan Ware challenges much received wisdom about the dynamics of two-party politics. In the process, he adds considerably to contemporary discussion of the intersection of structure and agency in the development and adaptation of political systems. Ware picks out two-party systems for concentrated attention because of their relative tractability: in his words, these systems are ideal for analysing the capacity of parties to pursue their interests in the face both of other actors within the political system and of elements within the party itself.
A major voice in late twentieth-century philosophy, Alan Donagan is distinguished for his theories on the history of philosophy and the nature of morality. The Philosophical Papers of Alan Donagan, volumes 1 and 2, collect 28 of Donagan's most important and best-known essays on historical understanding and ethics from 1957 to 1991. Volume 2 addresses issues in the philosophy of action and moral theory. With papers on Kant, von Wright, Sellars, and Chisholm, this volume also covers a range of questions in applied ethics, from the morality of Truman's decision to drop atomic bombs on Hiroshima and Nagasaki to ethical questions in medicine and law.
Alan Gewirth's Reason and Morality, in which he set forth the Principle of Generic Consistency, is a major work of modern ethical theory that, though much debated and highly respected, has yet to gain full acceptance. Deryck Beyleveld contends that this resistance stems from misunderstanding of the method and logical operations of Gewirth's central argument. In this book Beyleveld seeks to remedy this deficiency. His rigorous reconstruction of Gewirth's argument gives its various parts their most compelling formulation and clarifies its essential logical structure. Beyleveld then classifies all the criticisms that Gewirth's argument has received and measures them against his reconstruction of the argument. The overall result is an immensely rich picture of the argument, in which all of its complex issues and key moves are clearly displayed and its validity can finally be discerned. The comprehensiveness of Beyleveld's treatment provides ready access to the entire debate surrounding the foundational argument of Reason and Morality. It will be required reading for all who are interested in Gewirth's theory and deontological ethics and will be of central importance to moral and legal theorists.
This paper concerns Alan Turing’s ideas about machines, mathematical methods of proof, and intelligence. By the late 1930s, Kurt Gödel and other logicians, including Turing himself, had shown that no finite set of rules could be used to generate all true mathematical statements. Yet according to Turing, there was no upper bound to the number of mathematical truths provable by intelligent human beings, for they could invent new rules and methods of proof. So, the output of a human mathematician, for Turing, was not a computable sequence (i.e., one that could be generated by a Turing machine). Since computers only contained a finite number of instructions (or programs), one might argue, they could not reproduce human intelligence. Turing called this the “mathematical objection” to his view that machines can think. Logico-mathematical reasons, stemming from his own work, helped to convince Turing that it should be possible to reproduce human intelligence, and eventually compete with it, by developing the appropriate kind of digital computer. He felt it should be possible to program a computer so that it could learn or discover new rules, overcoming the limitations imposed by the incompleteness and undecidability results in the same way that human mathematicians presumably do.
In a recent article in this journal, Alan Thomas presents a novel defence of what I call ‘Rawlsian Institutionalism about Justice’ against G. A. Cohen’s well-known critique. In this response I aim to defend Cohen’s rejection of Institutionalism against Thomas’s arguments. In part this defence requires clarifying precisely what is at issue between Institutionalists and their opponents. My primary focus, however, is on Thomas’s critical discussion of Cohen’s endorsement of an ethical prerogative, as well as his appeal to the institutional framework of a ‘property-owning democracy’ in his elaboration of the precise institutional requirements of Rawlsian Institutionalist justice, and his related claim that Cohen’s rejection of Institutionalism involves an objectionable ‘double counting’ of the demands of justice. I argue that once we are clear about both the kind of justification that can be given for a prerogative within a plausible ethical theory, and about the key points of departure between Institutionalist views and their rivals, Cohen’s rejection of Institutionalism appears well-motivated, and Thomas’s claim that his view is guilty of double counting the demands of justice can be seen to be mistaken.
In his article, ‘Gratuitous evil and divine providence’, Alan Rhoda claims to have produced an uncontroversial theological premise for the evidential argument from evil. I argue that his premise is by no means uncontroversial among theists, and I doubt that any premise can be found that is both uncontroversial and useful for the argument from evil.
Discussions of the evidential argument from evil generally pay little attention to how different models of divine providence constrain the theist's options for response. After describing four models of providence and general theistic strategies for engaging the evidential argument, I articulate and defend a definition of ‘gratuitous evil’ that renders the theological premise of the argument uncontroversial for theists. This forces theists to focus their fire on the evidential premise, enabling us to compare models of providence with respect to how plausibly they can resist it. I then assess the four models, concluding that theists are better off vis-à-vis the evidential argument if they reject meticulous providence.
D. Alan Shewmon has advanced a well-documented challenge to the widely accepted total brain death criterion for death of the human being. We show that Shewmon's argument against this criterion is unsound, though he does refute the standard argument for that criterion. We advance a distinct argument for the total brain death criterion and answer likely objections. Since human beings are rational animals – sentient organisms of a specific type – the loss of the radical capacity for sentience involves a substantial change, the passing away of the human organism. In human beings total brain death involves the complete loss of the radical capacity for sentience, and so in human beings total brain death is death.
Review Essay: Exemplary Stories: On the Uses of Biography in Recent Sociology. Alan Sica and Stephen Turner, The Disobedient Generation: Social Theorists in the Sixties; Mathieu Deflem, Sociologists in a Global Age: Biographical Perspectives; Anthony Elliott and Charles Lemert, The New Individualism: The Emotional Costs of Globalization.
It has been just over 100 years since the birth of Alan Turing and more than 65 years since he published in Mind his seminal paper, “Computing Machinery and Intelligence”. In the Mind paper, Turing asked a number of questions, including whether computers could ever be said to have the power of “thinking”. Turing also set up a number of criteria—including his imitation game—under which a human could judge whether a computer could be said to be “intelligent”. Turing’s paper, as well as his important mathematical and computational insights of the 1930s and 1940s, led to his popular acclaim as the “Father of Artificial Intelligence”. In the years since his paper was published, however, no computational system has fully satisfied Turing’s challenge. In this paper we focus on a different question, ignored in, but inspired by, Turing’s work: How might the Artificial Intelligence practitioner implement “intelligence” on a computational device? Over the past 60 years, although the AI community has not produced a general-purpose computational intelligence, it has constructed a large number of important artifacts, as well as taken several philosophical stances able to shed light on the nature and implementation of intelligence. This paper contends that the construction of any human artifact includes an implicit epistemic stance. In AI this stance is found in commitments to particular knowledge representations and search strategies that lead to a product’s successes as well as its limitations. Finally, we suggest that computational and human intelligence are two different natural kinds, in the philosophical sense, and elaborate on this point in the conclusion.
As is well known, Alan Turing drew a line, embodied in the "Turing test," between intellectual and physical abilities, and hence between cognitive and natural sciences. Less familiarly, he proposed that one way to produce a "passer" would be to educate a "child machine," equating the experimenter's improvements in the initial structure of the child machine with genetic mutations, while supposing that the experimenter might achieve improvements more expeditiously than natural selection. On the other hand, in his foundational "On the chemical basis of morphogenesis," Turing insisted that biological explanation confine itself to purely physical and chemical means, eschewing vitalist and teleological talk entirely and hewing to D'Arcy Thompson's line that evolutionary "explanations" are historical and narrative in character, employing the same intentional and teleological vocabulary we use in doing human history, and hence, while perhaps on occasion of heuristic value, are not part of biology as a natural science. To apply Turing's program to recent issues, the attempt to give foundations to the social and cognitive sciences in the "real science" of evolutionary biology (as opposed to Turing's biology) is neither to give foundations, nor to achieve the unification of the social/cognitive sciences and the natural sciences.
The December 2008 White Paper (WP) on “Brain Death” published by the President’s Council on Bioethics (PCBE) reaffirmed its support for the traditional neurological criteria for human death. It spends considerable time explaining and critiquing what it takes to be the most challenging recent argument opposing the neurological criteria, formulated by D. Alan Shewmon, a leading critic of the “whole brain death” standard. The purpose of this essay is to evaluate and critique the PCBE’s argument. The essay begins with a brief background on the history of the neurological criteria in the United States and on the preparation of the 2008 WP. After introducing the WP’s contents, the essay sets forth Shewmon’s challenge to the traditional neurological criteria and the PCBE’s reply to Shewmon. The essay concludes by critiquing the WP’s novel justification for reaffirming the traditional conclusion, a justification the essay finds wanting.
Alan Shewmon's article, "The brain and somatic integration: Insights into the standard biological rationale for equating brain death with death" (2001), strikes at the heart of the standard justification for whole brain death criteria. The standard justification, which I call the standard paradigm, holds that the permanent loss of the functions of the entire brain marks the end of the integrative unity of the body. In my response to Shewmon's article, I first offer a brief summary of the standard paradigm and cite recent work by advocates of whole brain criteria who tenaciously cling to the standard paradigm despite increasing evidence showing that it has significant weaknesses. Second, I address Shewmon's case against the standard paradigm, arguing that he is successful in showing that whole brain dead patients have integrated organic unity. Finally, I discuss some minor problems with Shewmon's article, along with suggestions for further elaboration.
In this paper we present the syntax and semantics of a temporal action language named Alan, which was designed to model interactive multimedia presentations where the Markov property does not always hold. In general, Alan allows the specification of systems where the future state of the world depends not only on the current state, but also on the past states of the world. To the best of our knowledge, Alan is the first action language which incorporates causality with temporal formulas. In the process of defining the effect of actions we define the closure with respect to a path rather than to a state, and show that the non-Markovian model is an extension of the traditional Markovian model. Finally, we establish a relationship between theories of Alan and logic programs.
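The contrast the abstract draws, between effects computed from the current state and effects computed from a whole path of past states, can be illustrated with a short sketch. This is not Alan's actual syntax; the state names, actions, and function names below are illustrative assumptions rendered in ordinary Python, using a multimedia-presentation example in the spirit of the paper's domain.

```python
def markovian_effect(state, action):
    """Effect of an action as a function of the current state only."""
    if action == "play" and state == "paused":
        return "playing"
    return state

def non_markovian_effect(path, action):
    """Effect of an action as a function of the whole path (state history).

    Here 'rewind' jumps to the start only if the presentation was ever
    playing, a condition that cannot be decided from the final state alone.
    """
    state = path[-1]
    if action == "rewind" and "playing" in path:
        return "at_start"
    return state

# Two paths ending in the same state, "paused", yield different effects
# for the same action, so no function of the last state alone can agree
# with non_markovian_effect on both:
assert non_markovian_effect(["paused"], "rewind") == "paused"
assert non_markovian_effect(["playing", "paused"], "rewind") == "at_start"
```

The two assertions are the point of the sketch: because the effect is a closure over the path rather than over a state, the non-Markovian model strictly extends the Markovian one, which it recovers as the special case where only `path[-1]` is consulted.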
In this book Alan Haworth tends to sneer at libertarians. However, there are, I believe, a few sound criticisms. I have always held similar opinions of Murray Rothbard's and Friedrich Hayek's definitions of liberty and coercion, Robert Nozick's account of natural rights, and Hayek's spontaneous-order arguments. I urge believers in these positions to read Haworth. But I don't personally know many libertarians who believe them (or who regard Hayek as a libertarian).
C. J. Mews - Logic, Theology, and Poetry in Boethius, Abelard, and Alan of Lille: Words in the Absence of Things - Journal of the History of Philosophy 45.2: 327-328. Reviewed by Constant J. Mews, Monash University. Eileen C. Sweeney. Logic, Theology, and Poetry in Boethius, Abelard, and Alan of Lille: Words in the Absence of Things. The New Middle Ages. London: Palgrave MacMillan, 2006. Pp. xii + 248. Cloth, $65.00. Boethius, Abelard, and Alan of Lille all crystallized their thoughts in poetry as much as in prose. In seeking to condense their achievement into a slim monograph, Sweeney synthesizes much thought into relatively few pages. Her ambition pays off. Her opening chapter on Boethius..
The two books reviewed here are different efforts to embrace the vast subject called "social thought." The second edition of The Blackwell Dictionary of Modern Social Thought, edited by William Outhwaite with Alain Touraine, contains numerous updates; yet it also has some disadvantages compared to the first edition. Social Thought: From the Enlightenment to the Present, edited by Alan Sica, is a bold but controversial attempt at gathering in one anthology as many social thinkers as possible. Key Words: "social" • social thought/theory • William Outhwaite • Alan Sica • explanation.
Alan Carter's recent review in Mind of my Ethics of the Global Environment combines praise of biocentric consequentialism with criticisms that it could advocate both minimal satisfaction of human needs and the extinction of for the sake of generating extra people; Carter also maintains that as a monistic theory it is predictably inadequate to cover the full range of ethical issues, since only a pluralistic theory has this capacity. In this reply, I explain how the counter-intuitive implications of biocentric consequentialism suggested by Carter are not implications, and argue that since pluralistic theories either generate contradictions or collapse into monistic theories, the superiority of pluralistic theories is far from predictable. Thus Carter's criticisms fail to undermine biocentric consequentialism as a normative theory applicable to the generality of ethical issues.
Alan Weir’s new book is, like Darwin’s Origin of Species, ‘one long argument’. The author has devised a new kind of have-it-both-ways philosophy of mathematics, supposed to allow him to say out of one side of his mouth that the integer 1,000,000 exists and even that the cardinal ℵω exists, while saying out of the other side of his mouth that no numbers exist at all, and the whole book is devoted to an exposition and defense of this new view. The view is presented in the book in a way that can make it difficult for the reader to trace the main line of argument: with a great deal of apparatus, and with a great many digressions into subordinate issues. In what follows I will try to stick to what I take to be the essentials, even at the risk of oversimplifying some central but complicated issues, and at the cost of neglecting some interesting but peripheral ones. In chapter 1, the author introduces a distinction between what he calls ‘two aspects of meaning’ and dubs informational content and metaphysical content. Informational content is the aspect of meaning of primary interest to linguists, and the one of which speakers themselves are generally aware, at least upon reflection. Metaphysical content is supposed to be another aspect of meaning primarily of interest to philosophers. The basic idea is that if there are standards of correctness for assertions of a certain kind, then such an assertion may be called ‘true’ when those standards are met, even though the kind of correctness involved is not correctness in representing how the world is. What the world must be like in order for the utterance to be true is the metaphysical content of the assertion, but it need not be part of its ….
The origin of my article lies in the appearance of Copeland and Proudfoot's feature article in Scientific American, April 1999. This preposterous paper, as described on another page, suggested that Turing was the prophet of 'hypercomputation'. In their references, the authors listed Copeland's entry on 'The Church-Turing thesis' in the Stanford Encyclopedia. In the summer of 1999, I circulated an open letter criticising the Scientific American article. I included criticism of this Encyclopedia entry. This was forwarded to Prof. Ed Zalta, editor of the Encyclopedia, and after some discussion he invited me to submit an entry on 'Alan Turing'.
Alan Musgrave has been one of the most important philosophers of science in the last quarter of the 20th century. He has exemplified an exceptional combination of clear-headed and profound philosophical thinking. Two commitments seem to be the pillars of his thought: an uncompromising commitment to scientific realism and an equally uncompromising commitment to deductivism. The essays reprinted in this volume (which span a period of 25 years, from 1974 to 1999) testify to these two commitments. (There are two omissions from this collection: "Realism, Truth and Objectivity" in Realism and Anti-realism in the Philosophy of Science (1996, Kluwer) and "How to Do without Inductive Logic" (Science & Education vol. 8, 1999). I will make some references to these papers in what follows.) In the present review, instead of giving an orderly summary of the 16 papers of Essays, I discuss Musgrave's two major commitments and raise some worries about their combination.
Economic approaches to both social evaluation and decision-making are typically Paretian or utilitarian in nature and so display commitments to both welfarism and consequentialism. The contrast between the economic approach and any rights-based social philosophy has spawned a large literature that may be divided into two branches. The first is concerned with the compatibility of rights and utilitarianism seen as independent moral forces. This branch of the literature may be characterized as an example of the broader debate between the teleological and deontological approaches. The second is concerned with the possibility that substantial rights may be grounded in utilitarianism with the moral force of rights being derived from more basic commitments to welfarism and consequentialism. This branch of the literature may be characterized as an exploration of the flexibility of the teleological approach, and, in particular, its ability to give rise to views more normally associated with the deontological approach. This essay is concerned with the second branch of the literature.
In his short life, Alan Turing (1912-1954) made foundational contributions to philosophy, mathematics, biology, artificial intelligence, and computer science. He, as much as anyone, invented and showed how to program the digital electronic computer. From September 1939, his work on computation was war-driven and brutally practical. He developed the high-speed computing devices needed to decipher German Enigma Machine messages to and from U-boats, countering the most serious threat by far to Britain..
John Stuart Mill is—surprisingly—a difficult writer. He writes clearly, non-technically, and in a very plain prose which Bertrand Russell once described as a model for philosophers. It is never hard to see what the general drift of the argument is, and never hard to see which side he is on. He is, none the less, a difficult writer because his clarity hides complicated arguments and assumptions which often take a good deal of unpicking. And when we have done that unpicking, the task of analysing the merits and deficiencies of the arguments is still only half completed. This is true of all his work and particularly true of Liberty. It is an essay whose clarity and energy have made it the most popular of all Mill's work. Yet it conceals philosophical, sociological and historical assumptions of a very debatable kind. In his introduction, Mill says the object of this essay is to defend one very simple principle, as entitled to govern absolutely the dealings of society with the individual in the way of compulsion and control, whether the means used be legal penalties, or the moral coercion of public opinion.
Jonathan Quong: Alan Patten presents his account of minority rights as broadly continuous with Ronald Dworkin’s theory of equality of resources. This paper challenges this claim. I argue that, contra Patten, Dworkin’s theory does not provide a basis to offer accommodations or minority rights, as a matter of justice, to some citizens who find themselves at a relative disadvantage in pursuing their plans of life after voluntarily changing their cultural or religious commitments.
As a preliminary to the justification of equal opportunity, we require a few words on the concept. An opportunity is a chance to attain some goal or obtain some benefit. More precisely, it is the lack of some obstacle or obstacles to the attainment of some goal or benefit. Opportunities are equal in some specified or understood sense when persons face roughly the same obstacles or obstacles of roughly the same difficulty of some specified or understood sort. In different contexts we might have different sorts of benefits or obstacles in mind. But in the current social context, and in the context of this discussion, we refer to educational and occupational opportunities, chances to attain the benefits of higher education and of socially and economically desirable positions, benefits assumed to be desired by many or most individuals, other things being equal. And we generally divide obstacles into two broad classes: those imposed by the social system or by other persons in the society, for example, the hardships of life in the lower economic classes or barriers from prejudices based on race, sex, or ethnic background; and those imposed by natural disabilities, for example, low intelligence or lack of talents. The initial question is whether a moral society is obligated to create equality in opportunities in the senses just defined. I shall assume here initially that there is some such obligation on the part of society or the state, although I shall specify its nature and limits more precisely below. With the exception of certain libertarians, almost everyone, liberal and conservative alike, agrees in this assumption.
Alan Gewirth's Reason and Morality directed philosophical attention to the possibility of presenting a rational and rigorous demonstration of fundamental moral principles. Now, these previously unpublished essays from some of the most distinguished philosophers of our generation subject Gewirth's program to thorough evaluation and assessment. In a tour de force of philosophical analysis, Professor Gewirth provides detailed replies to all of his critics: a major, genuinely clarifying essay of intrinsic philosophical interest.
The Early Han enjoyed some prosperity while it struggled with centralization and political control of the kingdom. The Later Han was plagued by court intrigue, corrupt eunuchs, and massive flooding of the Yellow River that eventually culminated in popular uprisings that led to the demise of the dynasty. The period that followed was a renewed warring states period that likewise stimulated a rebirth of philosophical and religious debate, growth, and innovation. Alan K. L. Chan and Yuet-Keung Lo's Philosophy and Religion in Early Medieval China is a welcome addition to the growing body of literature on medieval China. It is a companion volume to their coauthored work, Interpretation and Literature in Early ..
This paper is a small contribution to two large subjects. The first large subject is that of exploitation—what it is for somebody to be exploited, in what ways people can be and are exploited, whether exploitation necessarily involves coercion, what Marx's understanding of exploitation was and whether it was adequate: all these are issues on which I merely touch, at best. My particular concern here is to answer the two questions, whether Marx thought capitalist exploitation unjust and how the answer to that question illuminates Marx's conception of morality in general. The second large subject is that of the nature of morality—whether there are specifically moral values and specifically moral forms of evaluation and criticism, how these relate to our explanatory interests in the same phenomena, what it would be like to abandon the ‘moral point of view’, whether the growth of a scientific understanding of society and ourselves inevitably undermines our confidence in the existence of moral ‘truths’. These again are issues on which I only touch if I mention them at all, but the questions I try to answer are, what does Marx propose to put in the place of moral judgment, and what kind of assessment of the horrors of capitalism does he provide if not a moral assessment?
Alan Gewirth has propounded a moral theory which commits him to the view that prescriptions can appropriately be addressed to people who have neither any moral reasons nor any prudential reasons to follow the prescriptions. We highlight the strangeness of Gewirth's position and then show that it undermines his attempt to come up with a supreme moral principle.
I live just off of Bell Road outside of Newburgh, Indiana, a small town of 3,000 people. A mile down the street Bell Road intersects with Telephone Road not as a modern reminder of a technology belonging to bygone days, but as testimony that this technology, now more than a century and a quarter old, is still with us. In an age that prides itself on its digital devices and in which the computer now equals the telephone as a medium of communication, it is easy to forget the debt we owe to an era that industrialized the flow of information, and that the light bulb, to pick a singular example, which is useful for upgrading visual information we might otherwise overlook, nonetheless remains the most prevalent of all modern-day information technologies. Edison’s light bulb, of course, belongs to a different order of informational devices than the computer, but not so the telephone, not entirely anyway. Alan Turing, best known for his work on the Theory of Computation (1937), the Turing Machine (also 1937) and the Turing Test (1950), is often credited with being the father of computer science and the father of artificial intelligence. Less well known to the casual reader but equally important is his work in computer engineering. The following lecture on the Automatic Computing Engine, or ACE, shows Turing in this different light, as a mechanist concerned with getting the greatest computational power from minimal hardware resources. Yet Turing’s work on mechanisms is often eclipsed by his thoughts on computability and his other theoretical interests. This is unfortunate for several reasons, one being that it obscures our picture of the historical trajectory of information technology, a second that it emphasizes a false dichotomy between “hardware” and “software” to which Turing himself did not subscribe but which has, nonetheless, confused researchers who study the nature of mind and intelligence for generations..
There are at least three tolerably distinct views about the connections between liberty and property; two of these I shall discuss fairly briefly in order to get on to Mill's central claims about the relationship between property rights and freedom, but in conclusion I shall return to them to show how they bear on what Mill has to say.