In this article the question is raised whether artificial intelligence has any psychological relevance, i.e. contributes to our knowledge of how the mind/brain works. It is argued that the psychological relevance of artificial intelligence of the symbolic kind is questionable as yet, since there is no indication that the brain structurally resembles or operates like a digital computer. However, artificial intelligence of the connectionist kind may have psychological relevance, not because the brain is a neural network, but because connectionist networks exhibit operating characteristics which mimic operant behavior. Finally it is concluded that, since most of the work done so far in AI and Law is of the symbolic kind, it has as yet contributed little to our understanding of the legal mind.
Alan Turing devised his famous test (TT) through a slight modification of the parlor game in which a judge tries to ascertain the gender of two people who are only linguistically accessible. Stevan Harnad has introduced the Total TT (TTT), in which the judge can look at the contestants in an attempt to determine which is a robot and which a person. But what if we confront the judge with an animal, and a robot striving to pass for one, and then challenge him to peg which is which? Now we can index the TTT to a particular animal and its synthetic correlate. We might therefore have TTT-rat, TTT-cat, TTT-dog, and so on. These tests, as we explain herein, are a better barometer of artificial intelligence (AI) than Turing's original TT, because AI seems to have ammunition sufficient only to reach the level of artificial animal, not artificial person.
Abstract: In the course of seeking an answer to the question "How do you know you are not a zombie?" Floridi (2005) issues an ingenious, philosophically rich challenge to artificial intelligence (AI) in the form of an extremely demanding version of the so-called knowledge game (or "wise-man puzzle," or "muddy-children puzzle")—one that purportedly ensures that those who pass it are self-conscious. In this article, on behalf of (at least the logic-based variety of) AI, I take up the challenge—which is to say, I try to show that this challenge can in fact be met by AI in the foreseeable future.
The emotions have been one of the most fertile areas of study in psychology, neuroscience, and other cognitive disciplines. Yet as influential as the work in those fields is, it has not yet made its way to the desks of philosophers who study the nature of mind. Passionate Engines unites the two for the first time, providing both a survey of what emotions can tell us about the mind, and an argument for how work in the cognitive disciplines can help us develop new ways of understanding the mind as a whole. Craig DeLancey shows that our best philosophical and scientific understanding of the emotions provides essential insights on key issues in the philosophy of mind and artificial intelligence: intentionality, aesthetics, rationality, action theory, moral psychology, consciousness, ontology and autonomy. He provides an accessible overview of the science of emotion, explaining with minimal jargon the technical issues that arise. The book also offers new ways to understand the mind, suggesting that it is autonomy--and not cognition--that should be the core problem of the philosophy of mind, cognitive science, and artificial intelligence. DeLancey argues that the philosophy of mind has been held back by an impoverished view of naturalism, and that a proper appreciation of the complexity of the sciences of mind, readily demonstrated by the science of emotion, will overcome this. Passionate Engines provides a unique, contemporary view of the link between science and philosophy, offering a bold new way of looking at the mind for scholars in a range of disciplines. Its accessible and refreshing approach will appeal to philosophers, psychologists, computer scientists, others in the cognitive disciplines, and lay people interested in the mind.
The peculiarity of the relationship between philosophy and Artificial Intelligence (AI) has been evident since the advent of AI. This paper aims to lay the basis of an extended and well-founded philosophy of AI: it delineates a multi-layered general framework to which different contributions in the field may be traced back. The core point is to underline how, in the same scenario, both the role of philosophy on AI and the role of AI on philosophy must be considered. Moreover, this framework is revised and extended in the light of a type of multi-agent system devoted to addressing the issue of scientific discovery from both a conceptual and a practical point of view.
Harry Collins interprets Hubert Dreyfus’s philosophy of embodiment as a criticism of all possible forms of artificial intelligence. I argue that this characterization is inaccurate and predicated upon a misunderstanding of the relevance of phenomenology for empirical scientific research.
The aims of this paper are threefold. First, to show that game-playing (GP), the discipline of Artificial Intelligence (AI) concerned with the development of automated game players, has a strong epistemological relevance within both AI and the vast area of the cognitive sciences. In this context games can be seen as a way of securely reducing (segmenting) real-world complexity, thus creating the laboratory environment necessary for testing the diverse types and facets of intelligence produced by computer models. The paper thus aims to promote the belief that games represent an excellent tool for the project of computational psychology (CP). Second, to underline how, despite this, GP has mainly adopted an engineering-inspired methodology and in doing so has distorted the framework of cognitive functionalism. Many successes (e.g., chess, checkers) have been achieved while eschewing human-like reasoning. AI has appeared to work well despite ignoring an intrinsic motivation, that of creating an explanatory link between machines and mind. Third, to assert that substantial improvements in GP may be obtained in the future only by renewed interest in human-inspired models of reasoning and in other cognitive studies. In fact, if we increase the complexity of games (from NP-completeness to AI-completeness) in order to reproduce real-life problems, computer science techniques reach an impasse. Many of AI’s recent GP experiences can be shown to validate this. The lack of consistent philosophical foundations for cognitive AI and the minimal philosophical commitment of AI investigation are two of the major reasons why CP has been overlooked.
This article examines argument structures and strategies in pro and con argumentation about the possibility of human-level artificial intelligence (AI) in the near-term future. It examines renewed controversy about strong AI that originated in a prominent 1999 book and continued at major conferences and in periodicals, media commentary, and Web-based discussions through 2002. It will be argued that the book made use of implicit, anticipatory refutation to reverse prevailing value hierarchies related to AI. Drawing on Perelman and Olbrechts-Tyteca's (1969) study of refutational argument, this study considers points of contact between opposing arguments that emerged in opposing loci, dissociations, and casuistic reasoning. In particular, it shows how perceptions of AI were reframed and rehabilitated through metaphorical language, reversal of the philosophical pair artificial/natural, appeals to the paradigm case, and use of the loci of quantity and essence. Furthermore, examining responses to the book in subsequent arguments indicates the topoi characteristic of the rhetoric of technology advocacy.
Artificial Intelligence has become big business in the military and in many industries. In spite of this growth there is still no consensus about what AI really is. The major factor responsible for this seems to be the lack of agreement about the relationship between behavior and intelligence. In part, ethical concerns generated by determining who, what, and how intelligence is ascribed may be perpetuating this lack of agreement.
Recent work in artificial intelligence has increasingly turned to argumentation as a rich, interdisciplinary area of research that can provide new methods related to evidence and reasoning in the area of law. Douglas Walton provides an introduction to basic concepts, tools and methods in argumentation theory and artificial intelligence as applied to the analysis and evaluation of witness testimony. He shows how witness testimony is by its nature inherently fallible and sometimes subject to disastrous failures. At the same time such testimony can provide evidence that is not only necessary but inherently reasonable for logically guiding legal experts to accept or reject a claim. Walton shows how to overcome the traditional disdain for witness testimony as a type of evidence shown by logical positivists, and the views of trial sceptics who doubt that trial rules deal with witness testimony in a way that yields a rational decision-making process.
One of the central factors influencing the process and the outcome of technology transfer is the nature of the technology being transferred. This paper identifies and discusses the main characteristics of Artificial Intelligence (AI) technology from the point of view of international technology transfer. It attempts to indicate the peculiarities of AI in this context and move towards a framework to assist recipient decision makers in optimising the formulation of their policies on AI technology transfer.
During the 1950s, there was a burst of enthusiasm about whether artificial intelligence might surpass human intelligence. Since then, technology has changed society so dramatically that the focus of study has shifted toward society’s ability to adapt to technological change. Technology and rapid communications weaken the capacity of society to integrate into the broader social structure those people who have had little or no access to education. (Most of the recent use of communications by the excluded has been disruptive, not integrative.) The interweaving of socioeconomic activity and large-scale systems has had a dehumanizing effect on people excluded from social participation by these trends. Jobs vanish at an accelerating rate. Marketing creates demand for goods which stress the global environment, even while the global environment no longer yields readily accessible resources. Mining and petroleum firms push into ever more challenging environments (e.g., deep mines and seabed mining) to meet resource demands. These activities are expensive, and resource prices rise rapidly, further excluding groups that cannot pay for these resources. The impact of large-scale systems on society leads to mass idleness, with the accompanying threat of violent reaction as unemployed masses seek to blame both people in power and the broader social structure for their plight. Perhaps the impact of large-scale systems on society has already eroded essential qualities of humanness. Humans, when they feel “socially useless,” are dehumanized. (At the same time, machines (at any scale) seem incapable of emotion or empathy.) Has the cost of technological progress been too high to pay? These issues are addressed in this paper.
The introduction of massive parallelism and the renewed interest in neural networks create a new need to evaluate the relationship between symbolic processing and artificial intelligence. The physical symbol system hypothesis has encountered many difficulties in coping with human concepts and common sense. Expert systems are showing more promise for the early stages of learning than for real expertise. There is a need to evaluate more fully the inherent limitations of symbol systems and the potential of programming compared with training. This can give more realistic goals for symbolic systems, particularly those based on logical foundations.
This paper presents work in progress on artificial intelligence in medicine (AIM) within the larger context of cognitive science. It introduces and develops the notion of emergence both as an inevitable evolution of artificial intelligence towards machine learning programs and as the result of a synergistic co-operation between the physician and the computer. From this perspective, the emergence of knowledge takes place in fine in the expert's mind and is enhanced both by computerised strategies of induction and deduction, and by software abilities to dialogue, co-operate and function as a cognitive extension of the physician's intellectual capabilities. The proposed methodology gives the expert a prominent role which consists, first, of faithfully enunciating the descriptive features of his medical knowledge, thus giving the computer a precise description of his own perception of basic medicine, and secondly, of painstakingly gathering patients into computerised case bases which simulate exhaustive long-term memory. The AI capacities for knowledge elaboration are then triggered, giving rise to mathematically optimal diagnoses, prognoses, or treatment protocols which the physician may then evaluate, accept, reject, or adapt at his convenience, and finally append to a knowledge base. The idea of emergence embraces many issues concerning the purpose and intent of artificial intelligence in medical practice. In particular, we address the representation problem as it is raised by classical decisional knowledge-based systems, and develop the notions of classifiers and hybrid systems as possible answers to this problem. Finally, since the whole methodology touches the problem of technological investment in health care, now inherent in modern medical practice, some ethical considerations accompany the exposé.
The aim of my contribution is to try to analyse some points of similarity and difference between post-Parsonian social systems theory models for sociology — with special reference to those of W. Buckley, F.E. Emery and N. Luhmann — and expert systems models from Artificial Intelligence. I keep specifically to post-Parsonian systems theories within sociology because they assume some postulates and criteria derived from cybernetics which are at the roots of AI. I refer in particular to the fundamental relevance of the system-environment relationship in both sociology and AI.
Artificial intelligence is presented as a set of tools with which we can try to come to terms with human problems, and with the assistance of which some human problems can be solved. Artificial intelligence is located in its social context, in terms of the environment within which it is developed and the applications to which it is put. Drawing on social theory, the paper considers the collaborative and social problem-solving processes which are involved in artificial intelligence and society. Looking ahead to the coming generations of highly parallel computing systems, it is suggested that lessons can be learnt from the highly parallel processes of human social problem-solving.
The age of artificial intelligence (AI) is upon us, and its effect upon society in the coming years will be noteworthy. Artificial intelligence is a field that encompasses such applications as robotics, expert systems, natural language understanding, speech recognition, and computer vision. The effect of these AI systems upon existing and future occupations will be important. This paper looks at artificial intelligence in terms of the creation of new job categories. The introduction of AI into the organization, to better familiarize employees with AI, is also discussed.
The current renewal of connectionist techniques using networks of neuron-like units has started to influence cognitive modelling. However, compared with classical artificial intelligence methods, the position of connectionism is still not clear. In this article artificial intelligence and connectionism are systematically compared as cognitive models so as to bring out the advantages and shortcomings of each. The problem of structured representations appears to be particularly important, suggesting likely research directions.
Distributed Artificial Intelligence (DAI) deals with computational systems in which several intelligent components interact in a common environment. This paper aims to point out and foster the exchange between DAI and the cognitive and social sciences in order to deal with the issues of interaction, and in particular with the reasons and possible strategies for social behaviour in multi-agent settings. A multi-agent system is also described which is motivated by requirements of cognitive plausibility and grounded in the notions of power, dependence and help. Connections with human-computer interaction are also suggested.
The paper discusses the characteristics of Biological Intelligence (BI) and its differences from artificial intelligence. In particular, the plasticity of the nervous system is considered in its different forms, with special attention to deterministic and localizationist views of the brain vs. holistic approaches. When memory and learning are considered, the localizationist views do not offer a possible solution to a number of problems, while memory may be better conceptualized in terms of categorization procedures and generalizing strategies. Finally, the problem of individual variability, an important feature in terms of BI, is considered. The legitimacy of analogies between BI and AI is discussed and the necessity for an innovative approach to the field of AI is stressed.
In artificial intelligence (AI), a number of criticisms were raised against the use of probability for dealing with uncertainty. All these criticisms, except what in this article we call the non-adequacy claim, have eventually been confuted. The non-adequacy claim is an exception because, unlike the other criticisms, it is exquisitely philosophical and, possibly for this reason, it was not discussed in the technical literature. A lack of clarity and understanding of this claim had a major impact on AI. Indeed, leaning mostly on this claim, some scientists developed an alternative research direction and, as a result, the AI community split into two schools: a probabilistic one and an alternative one. In this article, we argue that the non-adequacy claim has a strongly metaphysical character and, as such, should not be accepted as a conclusive argument against the adequacy of probability.
Small batch manufacture dominates the manufacturing sector of a growing number of industrialised countries. The organisational structures and management methods currently adopted in such enterprises are firmly based upon historical developments which started with individual craftsmen. These structures and methods are primarily concerned with the co-ordination of human activities, rather than with the management of the knowledge process underlying the creation of products. This paper argues that it is the failure to understand this knowledge process and its effective integration at a Knowledge Level which presents the real barrier to increased flexibility, not, as is presently perceived, a lack of suitable Information Level integration. Potential techniques and methodologies for achieving Knowledge Level integration are beginning to emerge from Artificial Intelligence research. Realisation of full Knowledge Level integration will require not only further research into the AI techniques and methodologies involved, but also an understanding of the wider human aspects of their application. Some questions concerning the effective coupling of human and artificial intelligence to achieve Knowledge Level integration of the product creation process are presented.
The paper identifies and assesses the implications of two approaches to the field of artificial intelligence and legal reasoning. The first — pragmatism — concentrates on the development of working systems to the exclusion of theoretical problems. The second — purism — focuses on the nature of the law and of intelligence with no regard for the delivery of commercially viable systems. Past work in AI and law is classified in terms of this division. By reference to The Latent Damage System, an operational system, the paper articulates and responds to conceivable purist (jurisprudential and AI) objections to such a program. The methods of the pragmatist are also called into question and refined. The author concludes that pragmatism within a purist framework is the only sound approach to developing reliable AI systems in law.
Over the years, AI has undergone a transformation from its original aim of producing an ‘intelligent’ machine to that of producing pragmatic solutions to problems of the market place. In doing so, AI has made a significant contribution to the debate on whether the computer is an instrument or an interlocutor. This paper discusses the issues of problem solving and creativity underlying this transformation, and attempts to clarify the distinction between resolutive intelligence and problematic intelligence. It points out that the advance of ‘intelligent’ technology, with its failure to make a clear distinction between resolutive and creative intelligence, could contribute to the further cultural marginalisation of human activities not connected with production. A further danger is that AI products may bring a further loss of social reputation and prestige to those activities for which it is not possible to build artificial devices.
Machine ethics and robot rights are quickly becoming hot topics in the artificial intelligence and robotics communities. We will argue that attempts to attribute moral agency and assign rights to all intelligent machines are misguided, whether applied to infrahuman or superhuman AIs, as are proposals to limit the negative effects of AIs by constraining their behavior. As an alternative, we propose a new science of safety engineering for intelligent artificial agents based on maximizing for what humans value. In particular, we challenge the scientific community to develop intelligent systems that have human-friendly values that they provably retain, even under recursive self-improvement.
The Turing Test (TT), as originally specified, centres on the ability to perform a social role. The TT can be seen as a test of an ability to enter into normal human social dynamics. In this light it seems unlikely that such an entity can be wholly designed in an off-line mode; rather, a considerable period of training in situ would be required. The argument that, since we can pass the TT and our cognitive processes might be implemented as a Turing Machine (TM), a TM that could pass the TT could consequently be built, is attacked on the grounds that not all TMs are constructible in a planned way. This observation points towards the importance of developmental processes that use random elements (e.g., evolution), but in these cases it becomes problematic to call the result artificial. This has implications for the means by which intelligent agents could be developed.
Considerations of personal identity bear on John Searle's Chinese Room argument, and on the opposed position that a computer itself could really understand a natural language. In this paper I develop the notion of a virtual person, modelled on the concept of virtual machines familiar in computer science. I show how Searle's argument, and J. Maloney's attempt to defend it, fail. I conclude that Searle is correct in holding that no digital machine could understand language, but wrong in holding that artificial minds are impossible: minds and persons are not the same as the machines, biological or electronic, that realize them.
This paper discusses different approaches in cognitive science and artificial intelligence research from the perspective of radical constructivism, addressing especially their relation to the biologically based theories of von Uexküll, Piaget, and Maturana and Varela. In particular, recent work in New AI and adaptive robotics on situated and embodied intelligence is examined, and we discuss in detail the role of constructive processes as the basis of situatedness in both robots and living organisms.
The growing interest in AI in advanced capitalist societies can be understood not just in relation to its practical achievements, which remain modest, but also in its ideological role as a technological paradigm for the reconstruction of capitalism. This is similar to the role played by scientific management during the second industrial revolution, circa 1880–1930, and involves the extension of the rationalization and routinization of labour to mental work. The conception of human intelligence and the emphasis on command and control systems in much contemporary AI research reflect its close relationship with the US military and corporate capital, which are the sources of many of AI's key metaphors and analogies.
I argue here that sophisticated AI systems, with the exception of those aimed at the psychological modeling of human cognition, must be based on general philosophical theories of rationality and, conversely, that philosophical theories of rationality should be tested by implementing them in AI systems. So philosophy and AI go hand in hand. I compare human and generic rationality within a broad philosophy of AI and conclude by suggesting that ultimately, virtually all familiar philosophical problems will turn out to be at least indirectly relevant to the task of building an autonomous rational agent, and conversely, that the AI enterprise has the potential to throw light at least indirectly on most philosophical problems.