Goal-directed problem solving, as originally advocated in Herbert Simon's means-ends analysis model, has primarily shaped the course of design research on artificially intelligent systems for problem solving. We contend that a key phase of the overall design process, one that logically precedes the actual problem-solving phase, has been decidedly disregarded. While systems designers have traditionally been preoccupied with goal-directed problem solving, the basic determinants of the ultimate desired goal state still remain to be fully understood or categorically defined. We propose a rational framework built on a set of logically interconnected conjectures to specifically recognize this neglected phase in the overall design process of intelligent systems for practical problem-solving applications.
Decision theory has had a long-standing history in the behavioural and social sciences as a tool for constructing good approximations of human behaviour. Yet as artificially intelligent systems (AIs) grow in intellectual capacity and eventually outpace humans, decision theory becomes ever more important as a model of AI behaviour. What sort of decision procedure might an AI employ? In this work, I propose that policy-based causal decision theory (PCDT), which places a primacy on the decision-relevance of predictors and simulations of agent behaviour, may be such a procedure. I compare this account to the recently developed functional decision theory (FDT), which is motivated by similar concerns. I also address potentially counterintuitive features of PCDT, such as its refusal to condition on observations made at certain times.
To assist in the evaluation process when determining architectures for new robots and intelligent systems equipped with artificial emotions, it is beneficial to understand the systems that have been built previously. Other surveys have classified these systems on the basis of their technological features. In this survey paper, we present a classification system based on a model similar to that used in psychology and philosophy for theories of emotion. This makes possible a connection to thousands of years of discourse on the topic of emotion. Five theories of emotion are described based on an emotion theory model proposed by Power and Dalgleish. The paper provides classifications, using a model of 10 new questions, for 14 major research projects that describe implementations or designs for systems that use artificial emotions for either robotics or general artificial intelligence. We also analyze trends in the usage of the various theories and in complexity changes over time.
In this article intelligent systems are placed in the context of accelerated Turing machines. Although such machines are not currently a reality, the very real gains in computing power made over previous decades require us to continually reevaluate the potential of intelligent systems. The economic theories of Adam Smith provide us with a useful insight into this question.
Barring some civilisation-ending natural or man-made catastrophe, future scientists will likely incorporate fully fledged artificially intelligent agents in their ranks. Their tasks will include the conjecturing, extending and testing of hypotheses. At present, human scientists have a number of methods to help them carry out those tasks. These range from well-articulated, formal and exceptionless rules to semi-articulated rules of thumb and intuitive hunches. If we are to hand over at least some of the aforementioned tasks to artificially intelligent agents, we need to find ways to make explicit, and ultimately formal, not to mention computable, the more obscure of the methods that scientists currently employ with some measure of success in their inquiries. The focus of this talk is a problem for which the available solutions are at best semi-articulated and far from perfect. It concerns the question of how to conjecture new hypotheses, or extend existing ones, such that they do not save the phenomena in gerrymandered or ad hoc ways. This talk puts forward a fully articulated formal solution to this problem by specifying what it is about the internal constitution of the content of a hypothesis that makes it gerrymandered or ad hoc. In doing so, it helps prepare the ground for the delegation of a full gamut of investigative duties to the artificially intelligent scientists of the future.
Based on an analysis of the origins and characteristics of Intelligent Design, this essay discusses the related issues of probability and irreducible complexity. From the viewpoint of complex systems theory, I suggest that Intelligent Design is not, as certain advocates claim, the only reasonable approach for dealing with the current difficulties of evolutionary biology.
One of the most foundational and continually contested questions in the cognitive sciences is the degree to which the functional organization of the brain can be understood as modular. In its classic formulation, a module was defined as a cognitive sub-system with nine specific properties; the classic module is, among other things, domain specific, encapsulated, and implemented in dedicated neural substrates. Most of the examinations, and especially the criticisms, of the modularity thesis have focused on these properties individually, for instance by finding counterexamples in which otherwise good candidates for cognitive modules are shown to lack domain specificity or encapsulation. The current paper goes beyond the usual approach by asking what some of the broad architectural implications of the modularity thesis might be, and attempting to test for these. The evidence does not favor a modular architecture for the cortex. Moreover, the evidence suggests that the best way to approach the understanding of cognition is not by analyzing and modelling different functional domains in isolation from the others, but rather by looking for points of overlap in their neural implementations, and exploiting these to guide the analysis and decomposition of the functions in question. This has significant implications for the question of how to approach the design and implementation of intelligent artifacts in general, and language-using robots in particular.
Theories of intelligence can be of use to neuroscientists if they: 1. Provide illuminating suggestions about the functional architecture of neural systems; 2. Suggest specific models of processing that neural circuits might implement. The objective of our session was to stand back and consider the prospects for this interdisciplinary exchange.
In this paper, the current AI view that emergent functionalities apply only to the study of subcognitive agents is questioned; a hypercognitive view of autonomous agents, as proposed in some AI subareas, is also rejected. As an alternative, a unified theory of social interaction is proposed which allows for the consideration of both cognitive and extracognitive social relations. A notion of functional effect is proposed, and the application of a formal model of cooperation is illustrated. Functional cooperation shows the role of extracognitive phenomena in the interaction of intelligent agents, thus representing a typical example of emergent functionality.
It is highly likely that, to achieve full human–machine symbiosis, truly intelligent, human-like cognitive systems may have to be developed first. Such systems should not only be capable of performing human-like thinking, reasoning, and problem solving, but also be capable of displaying human-like motivation, emotion, and personality. In this opinion article, I will argue that such systems are indeed possible and needed to achieve true and full symbiosis with humans. A computational cognitive architecture, Clarion, is used in this article to illustrate, in a preliminary way, what can be achieved in this regard. It is shown that Clarion involves complex structures, representations, and mechanisms, and is capable of capturing human cognitive performance as well as human motivation, emotion, personality, and other relevant aspects. It is further argued that the cognitive architecture can enable and facilitate true human–machine symbiosis.
Looking back on the development of computer technology, particularly in the context of manufacturing, we can distinguish three big waves of technological exuberance with a wavelength of roughly 30 years. In the first wave, during the 1950s, the mainframe computers of that time were conceptualized as "electronic brains" and envisaged as the central control unit of an "automatic factory". Thirty years later, during the 1980s, knowledge-based systems in computer-integrated manufacturing were adored as the computational core of the "unmanned factory". Both waves dismally stranded on the contumacies of reality. Nevertheless, again thirty years later, we now experience the departure of the "smart factory" based on networks of "artificially intelligent" multi-agent or "cyber-physical systems". From the very beginning, these technological exuberances were rooted in mistaken metaphors describing the artifacts and, hence, in delusions about the true nature of computer systems. The behaviour of computers is, as computing science teaches us, strictly restrained to executing computable functions by means of algorithms; it thus neither resembles the performance of a brain as part of a complex sensitive living body, nor is it in any meaningful sense "knowledgeable" or "intelligent". When the delusion of being able to implement "smart factories", despite the countless failures that came before, gains momentum anew, it appears absolutely essential to reflect on the underlying misconceptions.
An application of Narrative Knowledge Representation Language (NKRL) techniques to (declassified) 'terrorism in Southern Philippines' documents has been carried out in the context of the IST Parmenides project. This paper describes some aspects of this work: it is our belief, in fact, that the Knowledge Representation techniques and the Intelligent Information Retrieval tools used in this experiment may also be of interest in an 'Ontological Modelling of Legal Events and Legal Reasoning' context.