The peculiarity of the relationship between philosophy and Artificial Intelligence (AI) has been evident since the advent of AI. This paper aims to lay the foundations of an extended and well-founded philosophy of AI: it delineates a multi-layered general framework to which different contributions in the field may be traced back. The core point is to underline how, within the same scenario, both the role of philosophy in AI and the role of AI in philosophy must be considered. Moreover, this framework is revised and extended in light of a type of multi-agent system devoted to addressing the issue of scientific discovery, both from a conceptual and from a practical point of view.
In current philosophical research the term 'philosophy of social action' can be used, and has been used, in a broad sense to encompass the following central research topics: 1) action occurring in a social context, which includes multi-agent action; 2) joint attitudes (or "we-attitudes", such as joint intention and mutual belief) and other social attitudes needed for the explication and explanation of social action; 3) social macro-notions, such as actions performed by social groups and properties of social groups, such as their goals and beliefs; 4) social norms and social institutions (see Tuomela, 1984, 1995). The theory of social action, understood analogously in a broad sense, would then involve not only philosophical but all other relevant theorizing about social action. Thus, in this sense, such fields of Artificial Intelligence (AI) as Distributed AI (DAI) and the theory of Multi-Agent Systems (MAS) fall within the scope of the theory of social action. DAI studies the social side of computer systems and includes various well-known areas ranging from Human-Computer Interaction, Computer-Supported Cooperative Work, Organizational Processing, and Distributed Problem Solving to the Simulation of Social Systems and Organizations. Even if I am a philosopher with low artificial intelligence, I will below try to say something about what the scope of DAI should be taken to be on conceptual and philosophical grounds. (In the later sections of the paper the central notion of joint intention will be the main topic, in order to illustrate how philosophers and DAI researchers approach this issue.) Let us now consider the relationship between philosophy, especially the philosophy of social action, and DAI. Both are concerned with social matters and in this sense seem to have a connection to social science proper. What kinds of questions should these areas of study be concerned with?
In principle, ordinary social science should study all aspects of social life (in various societies and cultures), try to describe it, and create general theories to explain it.
I argue here that sophisticated AI systems, with the exception of those aimed at the psychological modeling of human cognition, must be based on general philosophical theories of rationality and, conversely, that philosophical theories of rationality should be tested by implementing them in AI systems. Philosophy and AI thus go hand in hand. I compare human and generic rationality within a broad philosophy of AI and conclude by suggesting that, ultimately, virtually all familiar philosophical problems will turn out to be at least indirectly relevant to the task of building an autonomous rational agent, and, conversely, that the AI enterprise has the potential to throw light, at least indirectly, on most philosophical problems.
In the Fall of 1983, I offered a junior/senior-level course in the Philosophy of Artificial Intelligence in the Department of Philosophy at SUNY Fredonia, after returning there from a year's leave to study and do research in computer science and artificial intelligence (AI) at SUNY Buffalo. Of the 30 students enrolled, most were computer science majors, about a third had no computer background, and only a handful had studied any philosophy. (I might note that enrollments have subsequently increased in the Philosophy Department's AI-related courses, such as logic, philosophy of mind, and epistemology, and that several computer science students have added philosophy as a second major.) This article describes that course, provides material for use in such a course, and offers a bibliography of relevant articles in the AI, cognitive science, and philosophical literature.
Buchanan and Darden have provided compelling reasons why philosophers of science concerned with the nature of scientific discovery should be aware of current work in artificial intelligence. This paper contends that artificial intelligence is even more than a source of useful analogies for the philosophy of discovery: the two fields are linked by interfield connections between philosophy of science and cognitive psychology, and between cognitive psychology and artificial intelligence. Because the philosophy of discovery must pay attention to the psychology of practicing scientists, and because current cognitive psychology adopts a computational view of mind with AI providing the richest models of how the mind works, the philosophy of discovery must also concern itself with AI models of mental operations. The relevance of the artificial intelligence notion of a frame to the philosophy of discovery is briefly discussed.
The field of Artificial Intelligence has been around for over 60 years now. Soon after its inception, the founding fathers predicted that within a few years an intelligent machine would be built. That prediction failed miserably. Not only has an intelligent machine not been built, but we are not much closer to building one than we were some 50 years ago. Many reasons have been given for this failure, but one theme has been dominant since its advent in 1969: the Frame Problem. What looked initially like an innocuous problem in logic turned out to be a much broader and harder problem of holism and relevance in commonsense reasoning. Despite an enormous literature on the topic, there is still disagreement not only on whether the problem has been solved, but even on what exactly the problem is. In this paper we provide a formal description of the initial problem, the early attempts at a solution, and its ramifications both in AI and in philosophy.
A translation of the renowned French reference book Vocabulaire de sciences cognitives, the Dictionary of Cognitive Science presents comprehensive definitions of more than 120 terms. The editor and an advisory board of specialists have brought together 60 internationally recognized scholars to give the reader a comprehensive understanding of the most current and dynamic thinking in cognitive science. Topics range from Abduction to Writing, and each entry covers its subject from as many perspectives as possible within the domains of psychology, artificial intelligence, neuroscience, philosophy, and linguistics. This multidisciplinary work is an invaluable resource for all collections.
Recent work in artificial intelligence has increasingly turned to argumentation as a rich, interdisciplinary area of research that can provide new methods related to evidence and reasoning in the area of law. Douglas Walton provides an introduction to basic concepts, tools, and methods in argumentation theory and artificial intelligence as applied to the analysis and evaluation of witness testimony. He shows how witness testimony is by its nature inherently fallible and sometimes subject to disastrous failures. At the same time, such testimony can provide evidence that is not only necessary but inherently reasonable for logically guiding legal experts to accept or reject a claim. Walton shows how to overcome the traditional disdain for witness testimony as a type of evidence shown by logical positivists, and the views of trial sceptics who doubt that trial rules deal with witness testimony in a way that yields a rational decision-making process.
The emotions have been one of the most fertile areas of study in psychology, neuroscience, and other cognitive disciplines. Yet as influential as the work in those fields is, it has not yet made its way to the desks of philosophers who study the nature of mind. Passionate Engines unites the two for the first time, providing both a survey of what emotions can tell us about the mind and an argument for how work in the cognitive disciplines can help us develop new ways of understanding the mind as a whole. Craig DeLancey shows that our best philosophical and scientific understanding of the emotions provides essential insights on key issues in the philosophy of mind and artificial intelligence: intentionality, aesthetics, rationality, action theory, moral psychology, consciousness, ontology, and autonomy. He provides an accessible overview of the science of emotion, explaining with minimal jargon the technical issues that arise. The book also offers new ways to understand the mind, suggesting that it is autonomy, and not cognition, that should be the core problem of the philosophy of mind, cognitive science, and artificial intelligence. DeLancey argues that the philosophy of mind has been held back by an impoverished view of naturalism, and that a proper appreciation of the complexity of the sciences of mind, readily demonstrated by the science of emotion, will overcome this. Passionate Engines provides a unique, contemporary view of the link between science and philosophy, offering a bold new way of looking at the mind for scholars in a range of disciplines. Its accessible and refreshing approach will appeal to philosophers, psychologists, computer scientists, others in the cognitive disciplines, and lay people interested in the mind.
Harry Collins interprets Hubert Dreyfus's philosophy of embodiment as a criticism of all possible forms of artificial intelligence. I argue that this characterization is inaccurate and predicated upon a misunderstanding of the relevance of phenomenology for empirical scientific research.
AI needs many ideas that have hitherto been studied only by philosophers. This is because a robot, if it is to have human level intelligence and ability to learn from its experience, needs a general world view in which to organize facts. It turns out that many philosophical problems take new forms when thought about in terms of how to design a robot. Some approaches to philosophy are helpful and others are not.
In this article the question is raised whether artificial intelligence has any psychological relevance, i.e. whether it contributes to our knowledge of how the mind/brain works. It is argued that the psychological relevance of artificial intelligence of the symbolic kind is as yet questionable, since there is no indication that the brain structurally resembles or operates like a digital computer. However, artificial intelligence of the connectionist kind may have psychological relevance, not because the brain is a neural network, but because connectionist networks exhibit operating characteristics that mimic operant behavior. Finally, it is concluded that, since most of the work done so far in AI and Law is of the symbolic kind, it has as yet contributed little to our understanding of the legal mind.
Artificial intelligence has often been seen as an attempt to reduce the natural mind to informational processes and, consequently, to naturalize philosophy. The many criticisms that were addressed to so-called "old-fashioned AI" do not concern this attempt itself, but the methods it used, especially the reduction of the mind to a symbolic level of abstraction, which has often appeared inadequate to capture the richness of our mental activity. As a consequence, there were many efforts to abandon semantic models in favor of elementary physiological mechanisms simulated by information processes. However, these views, and the subsequent criticisms of artificial intelligence that they contain, miss the very nature of artificial intelligence, which is not reducible to a "science of nature" but directly impacts our culture. More precisely, they tend to evacuate the role of semantic information. In other words, they tend to throw the baby out with the bathwater. This paper tries to revisit the epistemology of artificial intelligence in light of the opposition between the "sciences of nature" and the "sciences of culture" introduced by the German neo-Kantian philosophers. It then shows how this epistemological view opens onto the many contemporary applications of artificial intelligence that have already transformed, and will continue to transform, all our cultural activities and our world. Lastly, it places those perspectives in the context of the philosophy of information and, more particularly, emphasizes the role played by the notions of context and level of abstraction in artificial intelligence.
Economic value additions to knowledge and demand provide practical, embedded, and extensible meaning to philosophizing cognitive systems. Evaluation of a cognitive system is an empirical matter. Thinking of science in terms of distributed cognition (interactionism) enlarges the domain of cognition. Anything that actually contributes to the specific quality of output of a cognitive system is part of the system in time and/or space. Cognitive science studies the behaviour and knowledge structures of experts, and categorized structures based on underlying structures. Knowledge representation through understanding of 'epistemic cultures' is an evolutionary stage. But cognition goes beyond knowledge representation. Notwithstanding the importance of the epistemology of phenomena, the practicability cum philosophical aspects of machine learning need to be seen in dynamic behaviour in socio-economic-technical value additions, if human-machine interaction processes that are context-specific are incorporated into strong artificial intelligent systems. Cognitive science is also studied from both computational and biological angles. Evolution of interactive forms of reasoning through understanding of the meta-language of computations or of biological learning processes is possible. But the limitation of historical cultures predefines the role of interactive processes in user networks beyond technology networks. Despite this limitation, inclusive development notions of a heterogeneous national society such as India or Europe can be tested and incorporated.
Knowledge-Based Systems: Utopia and Reality. The following article is a response to K. Mainzer's 'Knowledge-Based Systems: Remarks on the Philosophy of Technology and Artificial Intelligence'. We show that Mainzer does not reach any of his aims: to analyse the possibilities and limits of AI technology; to reduce the anxiety and hostility toward AI that is motivated by fantastic speculations; and to evaluate the factual impact of AI on our lives and on society. His article contributes, on the contrary, to fantastic speculations that are not technologically justified in any way. There are two main reasons for his misleading view: (a) the state of the art of knowledge-based systems is incorrectly described; (b) the roots, paradigms, and alternatives to AI are not sufficiently analysed. We examine issues (a) and (b) in Chapters 1 and 2. In Chapter 3 we discuss how the conclusions Mainzer draws have to be modified. In analysing Mainzer's lines of argumentation, we try to clarify his methodological errors and philosophical attitude, which is in many respects not adequate to the subject of the article.
This paper examines the hypothesis that analogies may play a role in the generation of new ideas that are built into new explanatory theories. Methods of theory construction by analogy, by failed analogy, and by modular components from several analogies are discussed. Two different analyses of analogy are contrasted: direct mapping (Mary Hesse) and shared abstraction (Michael Genesereth). The structure of Charles Darwin's theory of natural selection shows various analogical relations. Finally, an "abstraction for selection theories" is shown to be the structure of a number of theories.
No kind of technology has had such a profound effect upon our lives and society as the new knowledge-based systems which are starting to overtake traditional computer technology. Few areas of science raise such high expectations and meet with so much sceptical resistance as Artificial Intelligence (AI). It is therefore the task of the philosophy of science and technology to analyze the factual methodological possibilities of AI technology. After a historical sketch of AI development (Chapter 2), the technological foundations of expert systems are described (Chapter 3). It is a surprising result of this analysis that expert systems are technical realizations of well-known philosophical methodologies. In this very sense, AI is not only technology but philosophy too (Chapter 4). On the other hand, the question arises whether knowledge-based systems can support the work of philosophers of science who want to explain the process of scientific research, inventions, and discoveries. This application of AI for philosophical professionals is discussed in Chapter 5. In Chapter 6 some scenarios of AI technology expected in the nineties are described. Then, besides the philosophy of science and technology, we have to consider the ethical questions which arise in evaluating the factual impact of AI technology on our lives and society.
Artificial language philosophy (also called 'ideal language philosophy') is the position that philosophical problems are best solved or dissolved through a reform of language. Its underlying methodology, the development of languages for specific purposes, leads to a conventionalist view of language in general and of concepts in particular. I argue that many philosophical practices can be reinterpreted as applications of artificial language philosophy. In addition, many factually occurring interrelations between the sciences and philosophy of science are justified and clarified by the assumption of an artificial language methodology. (Sebastian Lutz, European Journal for Philosophy of Science, DOI 10.1007/s13194-011-0042-6.)
Artificial Intelligence has become big business in the military and in many industries. In spite of this growth, there is still no consensus about what AI really is. The major factor responsible for this seems to be the lack of agreement about the relationship between behavior and intelligence. In part, certain ethical concerns raised by deciding who, what, and how intelligence is determined may be reinforcing this lack of agreement.
The study of consciousness has today moved beyond neurobiology and cognitive models. In the past few years, there has been a surge of research into various newer areas. The present article looks at the non-neurobiological and non-cognitive theories regarding this complex phenomenon, especially those that self-psychology, self-theory, artificial intelligence, quantum physics, visual cognitive science, and philosophy have to offer. Self-psychology has proposed the need to understand the self and its development, and the ramifications of the self for morality and empathy, which will help us understand consciousness better. There have been inroads made from the fields of computer science, machine technology, and artificial intelligence, including robotics, into understanding the consciousness of these machines and their implications for human consciousness. These areas are explored. Visual cortex and emotional theories, along with their implications, are discussed. The phylogeny and evolution of the phenomenon of consciousness is also highlighted, with theories on the emergence of consciousness in fetal and neonatal life. Quantum physics and its insights into the mind, along with the implications of the interface between consciousness and physics, are debated. The role of neurophilosophy in understanding human consciousness, the functions of such a concept, embodiment, the dark side of consciousness, future research needs, and the limitations of a scientific theory of consciousness complete the review. The importance and salient features of each theory are discussed, along with certain pitfalls, if present. A need for the integration of various theories to understand consciousness from a holistic perspective is stressed.
Alan Turing devised his famous test (TT) through a slight modification of the parlor game in which a judge tries to ascertain the gender of two people who are only linguistically accessible. Stevan Harnad has introduced the Total TT (TTT), in which the judge can look at the contestants in an attempt to determine which is a robot and which a person. But what if we confront the judge with an animal, and a robot striving to pass for one, and then challenge him to peg which is which? Now we can index the TTT to a particular animal and its synthetic correlate. We might therefore have TTTrat, TTTcat, TTTdog, and so on. These tests, as we explain herein, are a better barometer of artificial intelligence (AI) than Turing's original TT, because AI seems to have ammunition sufficient only to reach the level of artificial animal, not artificial person.
In the course of seeking an answer to the question "How do you know you are not a zombie?", Floridi (2005) issues an ingenious, philosophically rich challenge to artificial intelligence (AI) in the form of an extremely demanding version of the so-called knowledge game (or "wise-man puzzle," or "muddy-children puzzle"), one that purportedly ensures that those who pass it are self-conscious. In this article, on behalf of (at least the logic-based variety of) AI, I take up the challenge; which is to say, I try to show that this challenge can in fact be met by AI in the foreseeable future.