There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T. Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodinov, Kornai and Sandberg. - If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what the impact of superintelligent machines might be, how we might design safe and controllable systems, and whether there are directions of research that should best be avoided or strengthened.
In this article the question is raised whether artificial intelligence has any psychological relevance, i.e. contributes to our knowledge of how the mind/brain works. It is argued that the psychological relevance of artificial intelligence of the symbolic kind is questionable as yet, since there is no indication that the brain structurally resembles or operates like a digital computer. However, artificial intelligence of the connectionist kind may have psychological relevance, not because the brain is a neural network, but because connectionist networks exhibit operating characteristics which mimic operant behavior. Finally it is concluded that, since most of the work done so far in AI and Law is of the symbolic kind, it has as yet contributed little to our understanding of the legal mind.
The first decade of this century has seen the nascency of the first mathematical theory of general artificial intelligence. This theory of Universal Artificial Intelligence (UAI) has made significant contributions to many theoretical, philosophical, and practical AI questions. In a series of papers culminating in a book (Hutter, 2005), an exciting, sound, and complete mathematical model for a super intelligent agent (AIXI) has been developed and rigorously analyzed. While nowadays most AI researchers avoid discussing intelligence, the award-winning PhD thesis (Legg, 2008) provided the philosophical embedding and investigated the UAI-based universal measure of rational intelligence, which is formal, objective and non-anthropocentric. Recently, effective approximations of AIXI have been derived and experimentally investigated in a JAIR paper (Veness et al. 2011). This practical breakthrough has resulted in some impressive applications, finally muting earlier critique that UAI is only a theory. For the first time, without being provided any domain knowledge, the same agent is able to self-adapt to a diverse range of interactive environments. For instance, AIXI is able to learn from scratch to play TicTacToe, Pacman, Kuhn Poker, and other games by trial and error, without even being given the rules of the games. These achievements give new hope that the grand goal of Artificial General Intelligence is not elusive. This article provides an informal overview of UAI in context. It attempts to gently introduce a very theoretical, formal, and mathematical subject, and discusses philosophical and technical ingredients, traits of intelligence, some social questions, and the past and future of UAI.
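The universal intelligence measure mentioned in this abstract can be sketched informally. In Legg and Hutter's formulation (reproduced here as a rough sketch from that line of work, not quoted from the article itself), the intelligence of an agent \(\pi\) is its expected performance averaged over all computable environments, weighted by simplicity:

\[
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V^{\pi}_{\mu}
\]

Here \(E\) is the set of computable reward-bearing environments, \(K(\mu)\) is the Kolmogorov complexity of environment \(\mu\) (so simpler environments carry more weight), and \(V^{\pi}_{\mu}\) is the expected total reward that agent \(\pi\) achieves in \(\mu\). Averaging over all computable environments rather than human-typical tasks is what makes the measure formal, objective, and non-anthropocentric, as the abstract notes.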
Recent work in artificial intelligence has increasingly turned to argumentation as a rich, interdisciplinary area of research that can provide new methods related to evidence and reasoning in the area of law. Douglas Walton provides an introduction to basic concepts, tools and methods in argumentation theory and artificial intelligence as applied to the analysis and evaluation of witness testimony. He shows how witness testimony is by its nature inherently fallible and sometimes subject to disastrous failures. At the same time such testimony can provide evidence that is not only necessary but inherently reasonable for logically guiding legal experts to accept or reject a claim. Walton shows how to overcome the traditional disdain for witness testimony as a type of evidence shown by logical positivists, and the views of trial sceptics who doubt that trial rules deal with witness testimony in a way that yields a rational decision-making process.
THE CASE FOR GOVERNMENT BY ARTIFICIAL INTELLIGENCE. Tired of election madness? The rhetoric of politicians? Their unreliable promises? And less than good government?

Until recently, it hasn’t been hard for people to give up control to computers. Not very many people miss the effort and time required to do calculations by hand, to keep track of their finances, or to complete their tax returns manually. But relinquishing direct human control to self-driving cars is expected to be more of a challenge, despite the predicted decrease in vehicle accidents thanks to artificial intelligence that isn’t subject to human distractions and errors of judgment.

If turning vehicle control over to artificial intelligence is a challenge, it is a very mild one compared with the idea that we might one day recognize and want to implement the advantages of human government by AI. But, like autonomous vehicle control, government by AI is likely to offer decided benefits.

In other publications, the author has studied a variety of widespread human limitations that, throughout human history, have led to much human suffering as well as ecological destruction. For the first time, these psychological and cognitive human shortcomings are taken into account in an essay that makes the case for government by artificial intelligence.
The enduring progression of artificial intelligence and cybernetics offers an ever-closer possibility of rational and sentient robots. The ethics and morals deriving from this technological prospect have been considered in the philosophy of artificial intelligence, the design of automatons with roboethics and the contemplation of machine ethics through the concept of artificial moral agents. Across these categories, the robotics laws first proposed by Isaac Asimov in the twentieth century remain well-recognised and esteemed due to their specification of preventing human harm, stipulating obedience to humans and incorporating robotic self-protection. However, the overwhelming predominance of study in this field has focussed on human–robot interactions without fully considering the ethical inevitability of future artificial intelligences communicating together, and has not addressed the moral nature of robot–robot interactions. A new robotic law is proposed and termed AIonAI, or artificial intelligence-on-artificial intelligence. This law tackles the overlooked area where future artificial intelligences will likely interact amongst themselves, potentially leading to exploitation. As such, they would benefit from adopting a universal law of rights to recognise the inherent dignity and inalienable rights of artificial intelligences. Such a consideration can help prevent exploitation and abuse of rational and sentient beings, but would also importantly reflect on our moral code of ethics and the humanity of our civilisation.
The peculiarity of the relationship between philosophy and Artificial Intelligence (AI) has been evident since the advent of AI. This paper aims to lay the basis of an extended and well-founded philosophy of AI: it delineates a multi-layered general framework to which different contributions in the field may be traced back. The core point is to underline how, in the same scenario, both the role of philosophy for AI and the role of AI for philosophy must be considered. Moreover, this framework is revised and extended in the light of a type of multi-agent system devoted to addressing the issue of scientific discovery from both a conceptual and a practical point of view.
Report for "The Reasoner" on the conference "Philosophy and Theory of Artificial Intelligence", 3 & 4 October 2011, Thessaloniki, Anatolia College/ACT, http://www.pt-ai.org. --- Organization: Vincent C. Müller, Professor of Philosophy at ACT & James Martin Fellow, Oxford http://www.sophia.de --- Sponsors: EUCogII, Oxford-FutureTech, AAAI, ACM-SIGART, IACAP, ECCAI.
Abstract: In the course of seeking an answer to the question "How do you know you are not a zombie?" Floridi (2005) issues an ingenious, philosophically rich challenge to artificial intelligence (AI) in the form of an extremely demanding version of the so-called knowledge game (or "wise-man puzzle," or "muddy-children puzzle")—one that purportedly ensures that those who pass it are self-conscious. In this article, on behalf of (at least the logic-based variety of) AI, I take up the challenge—which is to say, I try to show that this challenge can in fact be met by AI in the foreseeable future.
The enduring innovations in artificial intelligence and robotics offer the promised capacity of computer consciousness, sentience and rationality. The development of these advanced technologies has been considered to merit rights; however, these can only be ascribed in the context of commensurate responsibilities and duties. This represents the discernible next step for evolution in this field. Addressing these needs requires attention to the philosophical perspectives of moral responsibility for artificial intelligence and robotics. A contrast to the moral status of animals may be considered. At a practical level, the attainment of responsibilities by artificial intelligence and robots can benefit from the established responsibilities and duties of human society, as their subsistence exists within this domain. These responsibilities can be further interpreted and crystallized through legal principles, many of which have been conserved from ancient Roman law. The ultimate and unified goal of stipulating these responsibilities lies in the advancement of mankind and the enduring preservation of the core tenets of humanity.
The theory and philosophy of artificial intelligence has come to a crucial point where the agenda for the forthcoming years is in the air. This special volume of Minds and Machines presents leading invited papers from a conference on the “Philosophy and Theory of Artificial Intelligence” that was held in October 2011 in Thessaloniki. Artificial Intelligence is perhaps unique among engineering subjects in that it has raised very basic questions about the nature of computing, perception, reasoning, learning, language, action, interaction, consciousness, humankind, life, and so on – and at the same time it has contributed substantially to answering these questions. There is thus a substantial tradition of work, both on AI by philosophers and of theory within AI itself. - The volume contains papers by Bostrom, Dreyfus, Gomila, O'Regan and Shagrir.
[Müller, Vincent C. (ed.), (2013), Philosophy and theory of artificial intelligence (SAPERE, 5; Berlin: Springer). 429 pp.] --- Can we make machines that think and act like humans or other natural intelligent agents? The answer to this question depends on how we see ourselves and how we see the machines in question. Classical AI and cognitive science had claimed that cognition is computation, and can thus be reproduced on other computing machines, possibly surpassing the abilities of human intelligence. This consensus has now come under threat and the agenda for the philosophy and theory of AI must be set anew, re-defining the relation between AI and Cognitive Science. We can re-claim the original vision of general AI from the technical AI disciplines; we can reject classical cognitive science and replace it with a new theory (e.g. embodied); or we can try to find new ways to approach AI, for example from neuroscience or from systems theory. To do this, we must go back to the basic questions on computing, cognition and ethics for AI. The 30 papers in this volume provide cutting-edge work from leading researchers that define where we stand and where we should go from here.
Artificial Intelligence and Scientific Method examines the remarkable advances made in the field of AI over the past twenty years, discussing their profound implications for philosophy. Taking a clear, non-technical approach, Donald Gillies shows how current views on scientific method are challenged by this recent research, and suggests a new framework for the study of logic. Finally, he draws on work by such seminal thinkers as Bacon, Gödel, Popper, Penrose, and Lucas, to address the hotly-contested question of whether computers might become intellectually superior to human beings.
Alan Turing devised his famous test (TT) through a slight modification of the parlor game in which a judge tries to ascertain the gender of two people who are only linguistically accessible. Stevan Harnad has introduced the Total TT, in which the judge can look at the contestants in an attempt to determine which is a robot and which a person. But what if we confront the judge with an animal, and a robot striving to pass for one, and then challenge him to peg which is which? Now we can index TTT to a particular animal and its synthetic correlate. We might therefore have TTTrat, TTTcat, TTTdog, and so on. These tests, as we explain herein, are a better barometer of artificial intelligence (AI) than Turing's original TT, because AI seems to have ammunition sufficient only to reach the level of artificial animal, not artificial person.
The aims of this paper are threefold: To show that game-playing (GP), the discipline of Artificial Intelligence (AI) concerned with the development of automated game players, has a strong epistemological relevance within both AI and the vast area of the cognitive sciences. In this context games can be seen as a way of securely reducing (segmenting) real-world complexity, thus creating the laboratory environment necessary for testing the diverse types and facets of intelligence produced by computer models. This paper aims to promote the belief that games represent an excellent tool for the project of computational psychology (CP). To underline how, despite this, GP has mainly adopted an engineering-inspired methodology and in doing so has distorted the framework of cognitive functionalism. Many successes (e.g. chess, checkers) have been achieved while eschewing human-like reasoning. AI has appeared to work well despite ignoring an intrinsic motivation, that of creating an explanatory link between machines and mind. To assert that substantial improvements in GP may be obtained in the future only by renewed interest in human-inspired models of reasoning and in other cognitive studies. In fact, if we increase the complexity of games (from NP-completeness to AI-completeness) in order to reproduce real-life problems, computer science techniques reach an impasse. Many of AI’s recent GP experiences can be shown to validate this. The lack of consistent philosophical foundations for cognitive AI and the minimal philosophical commitment of AI investigation are two of the major reasons that play an important role in explaining why CP has been overlooked.
In artificial intelligence (AI), a number of criticisms were raised against the use of probability for dealing with uncertainty. All these criticisms, except what in this article we call the non-adequacy claim, have been eventually confuted. The non-adequacy claim is an exception because, unlike the other criticisms, it is exquisitely philosophical and, possibly for this reason, it was not discussed in the technical literature. A lack of clarity and understanding of this claim had a major impact on AI. Indeed, mostly leaning on this claim, some scientists developed an alternative research direction and, as a result, the AI community split into two schools: a probabilistic and an alternative one. In this article, we argue that the non-adequacy claim has a strongly metaphysical character and, as such, should not be accepted as a conclusive argument against the adequacy of probability.
In this paper we want to analyze some philosophical and epistemological connections between a new kind of technology recently developed within robotics, and the previous mechanical approach. A new paradigm of machine design in robotics, currently defined as ‘Embodied Intelligence’, has recently been developed. Here we consider the debate on the relationship between the hand and the intellect, from the perspective of the history of philosophy, aiming to provide a more suitable understanding of this paradigm. The new bottom-up approach to design is deeply rooted in a new kind of empiricism, which tries to overcome issues connected with the previous approach strongly committed to the Artificial Intelligence (AI) debate and its origin. Since Turing’s time, the AI debate has shown a rationalistic bias which has remained undisputed until now. The paradigm shift we are witnessing nowadays is a reply to that bias, intended not only to achieve a better way to design robots, but also to understand some underlying epistemological remarks.
Artificial intelligence is presented as a set of tools with which we can try to come to terms with human problems, and with the assistance of which, some human problems can be solved. Artificial intelligence is located in its social context, in terms of the environment within which it is developed, and the applications to which it is put. Drawing on social theory, there is consideration of the collaborative and social problem-solving processes which are involved in artificial intelligence and society. In a look ahead to the coming generations of highly parallel computing systems, it is suggested that lessons can be learnt from the highly parallel processes of human social problem-solving.
Though it's difficult to agree on the exact date of their union, logic and artificial intelligence (AI) were married by the late 1950s, and, at least during their honeymoon, were happily united. What connubial permutation do logic and AI find themselves in now? Are they still (happily) married? Are they divorced? Or are they only separated, both still keeping alive the promise of a future in which the old magic is rekindled? This paper is an attempt to answer these questions via a review of six books. Encapsulated, our answer is that (i) logic and AI, despite tabloidish reports to the contrary, still enjoy matrimonial bliss, and (ii) only their future robotic offspring (as opposed to the children of connectionist AI) will mark real progress in the attempt to understand cognition.
The age of artificial intelligence (AI) is upon us, and its effect upon society in the coming years will be noteworthy. Artificial intelligence is a field that encompasses such applications as robotics, expert systems, natural language understanding, speech recognition, and computer vision. The effect of these AI systems upon existing and future job occupations will be important. This paper takes a look at artificial intelligence in terms of the creation of new job categories. Also, the introduction of AI into the organization to better familiarize the employees about AI will be discussed.
DeLancey shows that our understanding of emotion provides essential insight on key issues in philosophy of mind and artificial intelligence. He offers us a bold new approach to the study of the mind based on the latest scientific research and provides an accessible overview of the science of emotion.
Small batch manufacture dominates the manufacturing sector of a growing number of industrialised countries. The organisational structures and management methods currently adopted in such enterprises are firmly based upon historical developments which started with individual craftsmen. These structures and methods are primarily concerned with the co-ordination of human activities, rather than with the management of the knowledge process underlying the creation of products. This paper argues that it is the failure to understand this knowledge process and its effective integration at a Knowledge Level which presents the real barrier to increased flexibility, not, as is presently perceived, a lack of suitable Information Level integration. Potential techniques and methodologies for achieving Knowledge Level integration are beginning to emerge from Artificial Intelligence research. Realisation of full Knowledge Level integration will not only require further research into the AI techniques and methodologies involved, but also an understanding of the wider human aspects of their application. Some questions concerning the effective coupling of human and artificial intelligence to achieve Knowledge Level integration of the product creation process are presented.
The current renewal of connectionist techniques using networks of neuron-like units has started to have an influence on cognitive modelling. However, compared with classical artificial intelligence methods, the position of connectionism is still not clear. In this article artificial intelligence and connectionism are systematically compared as cognitive models so as to bring out the advantages and shortcomings of each. The problem of structured representations appears to be particularly important, suggesting likely research directions.
During the 1950s, there was a burst of enthusiasm about whether artificial intelligence might surpass human intelligence. Since then, technology has changed society so dramatically that the focus of study has shifted toward society’s ability to adapt to technological change. Technology and rapid communications weaken the capacity of society to integrate into the broader social structure those people who have had little or no access to education. (Most of the recent use of communications by the excluded has been disruptive, not integrative.) Interweaving of socioeconomic activity and large-scale systems had a dehumanizing effect on people excluded from social participation by these trends. Jobs vanish at an accelerating rate. Marketing creates demand for goods which stress the global environment, even while the global environment no longer yields readily accessible resources. Mining and petroleum firms push into ever more challenging environments (e.g., deep mines and seabed mining) to meet resource demands. These activities are expensive, and resource prices rise rapidly, further excluding groups that cannot pay for these resources. The impact of large-scale systems on society leads to mass idleness, with the accompanying threat of violent reaction as unemployed masses seek to blame both people in power as well as the broader social structure for their plight. Perhaps the impact of large-scale systems on society has already eroded essential qualities of humanness. Humans, when they feel “socially useless,” are dehumanized. (At the same time, machines (at any scale) seem incapable of emotion or empathy.) Has the cost of technological progress been too high to pay? These issues are addressed in this paper.
The aim of my contribution is to try to analyse some points of similarity and difference between post-Parsonian social systems theory models for sociology — with special reference to those of W. Buckley, F.E. Emery and N. Luhmann — and expert systems models from Artificial Intelligence. I keep specifically to post-Parsonian systems theories within sociology because they assume some postulates and criteria derived from cybernetics and which are at the roots of AI. I refer in particular to the fundamental relevance of the system-environment relationship in both sociology and AI.
The paper identifies and assesses the implications of two approaches to the field of artificial intelligence and legal reasoning. The first — pragmatism — concentrates on the development of working systems to the exclusion of theoretical problems. The second — purism — focuses on the nature of the law and of intelligence with no regard for the delivery of commercially viable systems. Past work in AI and law is classified in terms of this division. By reference to The Latent Damage System, an operational system, the paper articulates and responds to conceivable purist (jurisprudential and AI) objections to such a program. The methods of the pragmatist are also called into question and refined. The author concludes that pragmatism within a purist framework is the only sound approach to developing reliable AI systems in law.
Distributed Artificial Intelligence (DAI) deals with computational systems where several intelligent components interact in a common environment. This paper is aimed at pointing out and fostering the exchange between DAI and the cognitive and social sciences in order to deal with the issues of interaction, and in particular with the reasons and possible strategies for social behaviour in multi-agent settings. An approach is also described which is motivated by requirements of cognitive plausibility and grounded in the notions of power, dependence and help. Connections with human-computer interaction are also suggested.
The introduction of massive parallelism and the renewed interest in neural networks gives a new need to evaluate the relationship of symbolic processing and artificial intelligence. The physical symbol hypothesis has encountered many difficulties coping with human concepts and common sense. Expert systems are showing more promise for the early stages of learning than for real expertise. There is a need to evaluate more fully the inherent limitations of symbol systems and the potential for programming compared with training. This can give more realistic goals for symbolic systems, particularly those based on logical foundations.
The paper discusses the characteristics of Biological Intelligence (BI) and its differences from artificial intelligence. In particular the plasticity of the nervous system is considered in its different forms, with special attention to deterministic and localizationist views of the brain vs holistic approaches. When memory and learning are considered, the localizationist views do not offer a possible solution to a number of problems, while memory may be better conceptualized in terms of categorization procedures and generalizing strategies. Finally, the problem of individual variability, an important feature in terms of BI, is considered. The legitimacy of analogies between BI and AI is discussed and the necessity for an innovative approach to the field of AI is stressed.
A comparison is made between two unlikely debates over intelligence. One debate took place in 1550 at Valladolid, Spain, between Bartolomé de las Casas and Juan Gines de Sepúlveda over the intelligence of the Amerindian. The other debate is contemporary, between John Searle and various representatives of the “strong” artificial intelligence community over the adequacy of the Turing test for intelligence. Although the contemporary debate has yet to die down, the Valladolid debate has been over for four hundred years. The question asked here is whether the contemporary debate can profit from the previous one. The common bond providing the basis for contrast is the issue of the “other” which is present in both debates. From this contrast, the observation is made that the question of meaning is intimately tied to the question of intelligence.
This paper investigates how the simulation of intelligence, an activity that has been considered the notional task of Artificial Intelligence, does not comprise its duplication. Briefly touching on the distinction between conceivability and possibility, and commenting on Ryan’s approach to fiction in terms of the interplay between possible worlds and her principle of minimal departure, we specify verisimilitude in Artificial Intelligence as the accurate resemblance of intelligence by its simulation and, from this characterization, claim the metaphysical impossibility of duplicating intelligence, as neither verisimilarly nor convincingly simulating intelligence involves its duplication. To this end, we argue by a representative case of simulation that, albeit conceivable, Turing’s test for machine intelligence wrongly equates the occurrence of indistinguishable intelligence performance to intelligence duplication, which is grounded in a prima facie conceivable but metaphysically impossible view that separates intelligence from its origin. Finally, we establish the following criterion for evaluating simulation in Artificial Intelligence: simulations succeed in AI if and only if they are able to epistemically persuade human beings that intelligence has been duplicated, that is, if and only if verisimilar simulations can convincingly minimally depart from actual intelligence.
One of the central factors influencing the process and the outcome of technology transfer is the nature of the technology being transferred. This paper identifies and discusses the main characteristics of Artificial Intelligence (AI) technology from the point of view of international technology transfer. It attempts to indicate the peculiarities of AI in this context and move towards a framework to assist recipient decision makers in optimising the formulation of their policies on AI technology transfer.
Harry Collins interprets Hubert Dreyfus’s philosophy of embodiment as a criticism of all possible forms of artificial intelligence. I argue that this characterization is inaccurate and predicated upon a misunderstanding of the relevance of phenomenology for empirical scientific research.
Artificial Intelligence has become big business in the military and in many industries. In spite of this growth there still remains no consensus about what AI really is. The major factor which seems to be responsible for this is the lack of agreement about the relationship between behavior and intelligence. In part, certain ethical concerns generated by deciding who, what, and how intelligence is determined may be contributing to this lack of agreement.
A survey is made of the main approaches in the mind-study-oriented part of Artificial Intelligence, focusing on controversial issues and extreme hypotheses. Various meanings of the terms "intelligence" and "artificial intelligence" are discussed. Limitations on constructing intelligent systems resulting from the lack of formalized models of cognitive activity are shown. The approaches surveyed are then recapitulated in the light of these limitations.
This paper presents work in progress on artificial intelligence in medicine (AIM) within the larger context of cognitive science. It introduces and develops the notion of emergence both as an inevitable evolution of artificial intelligence towards machine learning programs and as the result of a synergistic co-operation between the physician and the computer. From this perspective, the emergence of knowledge takes place in fine in the expert's mind and is enhanced both by computerised strategies of induction and deduction, and by software abilities to dialogue, co-operate and function as a cognitive extension of the physician's intellectual capabilities. The proposed methodology gives the expert a prominent role which consists, first, of faithfully enunciating the descriptive features of his medical knowledge, thus giving the computer a precise description of his own perception of basic medicine, and secondly, of painstakingly gathering patients into computerised case bases which simulate exhaustive long-term memory. The AI capacities for knowledge elaboration are then triggered, giving rise to mathematically optimal diagnoses, prognoses, or treatment protocols which the physician may then evaluate, accept, reject, or adapt at his convenience, and finally append to a knowledge base. The idea of emergence embraces many issues concerning the purpose and intent of artificial intelligence in medical practice. Particularly, we address the representation problem as it is raised by classical decisional knowledge-based systems, and develop the notions of classifiers and hybrid systems as possible answers to this problem. Finally, since the whole methodology touches the problem of technological investment in health care, now inherent in modern medical practice, some ethical considerations accompany the exposé.
We consider a special case of heuristics, namely numeric heuristic evaluation functions, and their use in artificial intelligence search algorithms. The problems they are applied to fall into three general classes: single-agent path-finding problems, two-player games, and constraint-satisfaction problems. In a single-agent path-finding problem, such as the Fifteen Puzzle or the travelling salesman problem, a single agent searches for a shortest path from an initial state to a goal state. Two-player games, such as chess and checkers, involve an adversarial relationship between two players, each trying to win the game. In a constraint-satisfaction problem, such as the 8-Queens problem, the task is to find a state that satisfies a set of constraints. All of these problems are computationally intensive, and heuristic evaluation functions are used to reduce the amount of computation required to solve them. In each case we explain the nature of the evaluation functions used, how they are used in search algorithms, and how they can be automatically learned or acquired.
This article examines argument structures and strategies in pro and con argumentation about the possibility of human-level artificial intelligence (AI) in the near-term future. It examines renewed controversy about strong AI that originated in a prominent 1999 book and continued at major conferences and in periodicals, media commentary, and Web-based discussions through 2002. It will be argued that the book made use of implicit, anticipatory refutation to reverse prevailing value hierarchies related to AI. Drawing on Perelman and Olbrechts-Tyteca's (1969) study of refutational argument, this study considers points of contact between opposing arguments that emerged in opposing loci, dissociations, and casuistic reasoning. In particular, it shows how perceptions of AI were reframed and rehabilitated through metaphorical language, reversal of the philosophical pair artificial/natural, appeals to the paradigm case, and use of the loci of quantity and essence. Furthermore, examining responses to the book in subsequent arguments indicates the topoi characteristic of the rhetoric of technology advocacy.
In this chapter, we explore the development and importance of the connection between argumentation and artificial intelligence. Specifically, we show that the influence of argumentation on AI has occurred within a framework that is consistent with the basic approach of Pragma-Dialectics. While the pragma-dialectical approach is typically conceived of as applying primarily to argumentation occurring between human agents, we show that the basic features of this approach can consistently be applied in a virtual context, whereby the goal-directed activities of, and exchanges of information between, artificial agents are regulated by procedural rules.
Systems Theory and Scientific Philosophy constitutes a totally new approach to philosophy, the philosophy of mind and the problems of artificial intelligence, and is based upon the pioneering work in cybernetics of W. Ross Ashby. While science is humanity's attempt to know how the world works and philosophy its attempt to know why, scientific philosophy is the application of scientific techniques to questions of philosophy.