The theory and philosophy of artificial intelligence has come to a crucial point where the agenda for the forthcoming years is in the air. This special volume of Minds and Machines presents leading invited papers from a conference on the “Philosophy and Theory of Artificial Intelligence” that was held in October 2011 in Thessaloniki. Artificial Intelligence is perhaps unique among engineering subjects in that it has raised very basic questions about the nature of computing, perception, reasoning, learning, language, action, interaction, consciousness, humankind, life, etc. – and at the same time it has contributed substantially to answering these questions. There is thus a substantial tradition of work, both on AI by philosophers and on theory within AI itself. - The volume contains papers by Bostrom, Dreyfus, Gomila, O'Regan and Shagrir.
Report for "The Reasoner" on the conference "Philosophy and Theory of Artificial Intelligence", 3 & 4 October 2011, Thessaloniki, Anatolia College/ACT, http://www.pt-ai.org. --- Organization: Vincent C. Müller, Professor of Philosophy at ACT & James Martin Fellow, Oxford http://www.sophia.de --- Sponsors: EUCogII, Oxford-FutureTech, AAAI, ACM-SIGART, IACAP, ECCAI.
[Müller, Vincent C. (ed.), (2013), Philosophy and theory of artificial intelligence (SAPERE, 5; Berlin: Springer). 429 pp. ] --- Can we make machines that think and act like humans or other natural intelligent agents? The answer to this question depends on how we see ourselves and how we see the machines in question. Classical AI and cognitive science had claimed that cognition is computation, and can thus be reproduced on other computing machines, possibly surpassing the abilities of human intelligence. This consensus has now come under threat and the agenda for the philosophy and theory of AI must be set anew, re-defining the relation between AI and Cognitive Science. We can re-claim the original vision of general AI from the technical AI disciplines; we can reject classical cognitive science and replace it with a new theory (e.g. embodied); or we can try to find new ways to approach AI, for example from neuroscience or from systems theory. To do this, we must go back to the basic questions on computing, cognition and ethics for AI. The 30 papers in this volume provide cutting-edge work from leading researchers that define where we stand and where we should go from here.
The peculiarity of the relationship between philosophy and Artificial Intelligence (AI) has been evident since the advent of AI. This paper aims to lay the basis for an extended and well-founded philosophy of AI: it delineates a multi-layered general framework to which different contributions in the field may be traced back. The core point is to underline how, in the same scenario, both the role of philosophy on AI and the role of AI on philosophy must be considered. Moreover, this framework is revised and extended in the light of a type of multi-agent system devoted to addressing the issue of scientific discovery, both from a conceptual and from a practical point of view.
Systems Theory and Scientific Philosophy constitutes a totally new approach to philosophy, the philosophy of mind and the problems of artificial intelligence, and is based upon the pioneering work in cybernetics of W. Ross Ashby. While science is humanity's attempt to know how the world works and philosophy its attempt to know why, scientific philosophy is the application of scientific techniques to questions of philosophy.
In current philosophical research the term 'philosophy of social action' can be used - and has been used - in a broad sense to encompass the following central research topics: 1) action occurring in a social context; this includes multi-agent action; 2) joint attitudes (or "we-attitudes" such as joint intention, mutual belief) and other social attitudes needed for the explication and explanation of social action; 3) social macro-notions, such as actions performed by social groups and properties of social groups such as their goals and beliefs; 4) social norms and social institutions (see Tuomela, 1984, 1995). The theory of social action understood analogously in a broad sense would then involve not only philosophical but all other relevant theorizing about social action. Thus, in this sense, such fields of Artificial Intelligence (AI) as Distributed AI (DAI) and the theory of Multi-Agent Systems (MAS) fall within the scope of the theory of social action. DAI studies the social side of computer systems and includes various well-known areas ranging from Human-Computer Interaction, Computer-Supported Cooperative Work, Organizational Processing and Distributed Problem Solving to Simulation of Social Systems and Organizations. Even if I am a philosopher with low artificial intelligence, I will try below to say something about what the scope of DAI should be taken to be on conceptual and philosophical grounds. (In the later sections of the paper the central notion of joint intention will be the main topic - in order to illustrate how philosophers and DAI-researchers approach this issue.) Let us now consider the relationship between philosophy - especially philosophy of social action - and DAI. Both are concerned with social matters and in this sense seem to have a connection to social science proper. What kinds of questions should these areas of study be concerned with?
In principle, ordinary social science should study all aspects of social life (in various societies and cultures), try to describe it and create general theories to explain it.
In the Fall of 1983, I offered a junior/senior-level course in Philosophy of Artificial Intelligence, in the Department of Philosophy at SUNY Fredonia, after returning there from a year’s leave to study and do research in computer science and artificial intelligence (AI) at SUNY Buffalo. Of the 30 students enrolled, most were computer science majors, about a third had no computer background, and only a handful had studied any philosophy. (I might note that enrollments have subsequently increased in the Philosophy Department’s AI-related courses, such as logic, philosophy of mind, and epistemology, and that several computer science students have added philosophy as a second major.) This article describes that course, provides material for use in such a course, and offers a bibliography of relevant articles in the AI, cognitive science, and philosophical literature.
Buchanan and Darden have provided compelling reasons why philosophers of science concerned with the nature of scientific discovery should be aware of current work in artificial intelligence. This paper contends that artificial intelligence is even more than a source of useful analogies for the philosophy of discovery: the two fields are linked by interfield connections between philosophy of science and cognitive psychology and between cognitive psychology and artificial intelligence. Because the philosophy of discovery must pay attention to the psychology of practicing scientists, and because current cognitive psychology adopts a computational view of mind with AI providing the richest models of how the mind works, the philosophy of discovery must also concern itself with AI models of mental operations. The relevance of the artificial intelligence notion of a frame to the philosophy of discovery is briefly discussed.
[Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] -- This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. This work examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as ethical and societal problems that artificial intelligence raises or will raise. The key issues this volume investigates include the relation of AI and cognitive science, ethics of AI and robotics, brain emulation and simulation, hybrid systems and cyborgs, intelligence and intelligence testing, interactive systems, multi-agent systems, and superintelligence. Based on the 2nd conference on “Theory and Philosophy of Artificial Intelligence” held in Oxford, the volume includes prominent researchers within the field from around the world.
Recent philosophy of mind has increasingly focused on the role of technology in shaping, influencing, and extending our mental faculties. Technology extends the mind in two basic ways: through the creative design of artifacts and the purposive use of instruments. If the meaningful activity of technological artifacts were exhaustively described in these mind-dependent terms, then a philosophy of technology would depend entirely on our theory of mind. In this dissertation, I argue that a mind-dependent approach to technology is mistaken. Instead, some machines are best understood as independent participants in their own right, contributing to and augmenting a variety of social practices as active, though often unrecognized, community members. Beginning with Turing’s call for “fair play for machines”, I trace an argument concerning the social autonomy of nonhuman agents through the artificial intelligence debates of the 20th century. I’ll argue that undue focus on the mind has obscured the force of Turing’s proposal, leaving the debates in an unfortunate stalemate. I will then examine a network theoretic alternative to the study of multi-agent complex systems that can avoid anthropocentric, mind-dependent ways of framing human-machine interactions. I argue that this approach allows for both scientific and philosophical treatment of large and complicated sociotechnical systems, and suggests novel methods for designing, managing, and maintaining such systems. Rethinking machines in mind-independent terms will illuminate the nature, scope, and evolution of our social and technological practices, and will help clarify the relationships between minds, machines, and the environments we share.
A translation of the renowned French reference book, Vocabulaire de sciences cognitives, the Dictionary of Cognitive Science presents comprehensive definitions of more than 120 terms. The editor and advisory board of specialists have brought together 60 internationally recognized scholars to give the reader a comprehensive understanding of the most current and dynamic thinking in cognitive science. Topics range from Abduction to Writing, and each entry covers its subject from as many perspectives as possible within the domains of psychology, artificial intelligence, neuroscience, philosophy, and linguistics. This multidisciplinary work is an invaluable resource for all collections.
This paper is a complement to the recent wealth of literature suggesting a strong philosophical relationship between artificial life (A-Life) and artificial intelligence (AI). I seek to point out where this analogy seems to break down, or where it would lead us to draw incorrect conclusions about the philosophical situation of A-Life. First, I sketch a thought experiment (based on the work of Tom Ray) that suggests how a certain subset of A-Life experiments should be evaluated. In doing so, I suggest that treating A-Life experiments as if they were just AI experiments applied to a new domain may lead us to see problems (like Searle’s “Chinese room”) which do not exist. In the second half of the paper, I examine the reasons for suggesting that there is a philosophical relationship between the two fields. I characterize the strong thesis for a translation of AI concepts, metaphors, and arguments into A-Life as the “global replacement strategy.” Such a strategy is only fruitful inasmuch as there is a strong analogy between AI and A-Life. I conclude the paper with a discussion of two areas where such a strong analogy seems to break down. These areas relate to eliminative materialism and the lack of a “subjective” element in biology. I conclude that the burden of proof lies with the person who wishes to import a concept from another discipline into A-Life, even if that other discipline is AI.
A philosophical appraisal of historical positions on the nature of thought, mentality, and intelligence, this survey begins with the views of Descartes, Turing, and Newell and Simon, but includes the work of Haugeland, Fodor, Searle, and other major scholars. The underlying issues concern distinctions between syntax, semantics, and pragmatics, where physical computers seem to be best viewed as mark-manipulating or syntax-processing mechanisms. Alternative accounts have been advanced of what it takes to be a thinking thing, including being Turing machines, symbol systems, semantic engines, and semiotic systems, which have the ability to use signs in the sense of Charles S. Peirce. Reflections regarding the nature of representations and the existence of mental algorithms suggest that the theory of minds as semiotic systems should be preferred to its alternatives, where digital computers can still qualify as “intelligent machines” even without minds.
This paper examines the hypothesis that analogies may play a role in the generation of new ideas that are built into new explanatory theories. Methods of theory construction by analogy, by failed analogy, and by modular components from several analogies are discussed. Two different analyses of analogy are contrasted: direct mapping (Mary Hesse) and shared abstraction (Michael Genesereth). The structure of Charles Darwin's theory of natural selection shows various analogical relations. Finally, an "abstraction for selection theories" is shown to be the structure of a number of theories.
If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and critically evaluating such proposals.
The enduring progression of artificial intelligence and cybernetics offers an ever-closer possibility of rational and sentient robots. The ethics and morals deriving from this technological prospect have been considered in the philosophy of artificial intelligence, the design of automatons with roboethics and the contemplation of machine ethics through the concept of artificial moral agents. Across these categories, the robotics laws first proposed by Isaac Asimov in the twentieth century remain well-recognised and esteemed due to their specification of preventing human harm, stipulating obedience to humans and incorporating robotic self-protection. However, the overwhelming predominance of study in this field has focussed on human–robot interactions without fully considering the ethical inevitability of future artificial intelligences communicating together, and has not addressed the moral nature of robot–robot interactions. A new robotic law is proposed and termed AIonAI, or artificial intelligence-on-artificial intelligence. This law tackles the overlooked area where future artificial intelligences will likely interact amongst themselves, potentially leading to exploitation. As such, they would benefit from adopting a universal law of rights to recognise inherent dignity and the inalienable rights of artificial intelligences. Such a consideration can help prevent exploitation and abuse of rational and sentient beings, but would also importantly reflect on our moral code of ethics and the humanity of our civilisation.
The philosophy of AI has seen some changes, in particular: 1) AI moves away from cognitive science, and 2) the long term risks of AI now appear to be a worthy concern. In this context, the classical central concerns – such as the relation of cognition and computation, embodiment, intelligence & rationality, and information – will regain urgency.
Harry Collins interprets Hubert Dreyfus’s philosophy of embodiment as a criticism of all possible forms of artificial intelligence. I argue that this characterization is inaccurate and predicated upon a misunderstanding of the relevance of phenomenology for empirical scientific research.