In this article the question is raised whether artificial intelligence has any psychological relevance, i.e. contributes to our knowledge of how the mind/brain works. It is argued that the psychological relevance of artificial intelligence of the symbolic kind is questionable as yet, since there is no indication that the brain structurally resembles or operates like a digital computer. However, artificial intelligence of the connectionist kind may have psychological relevance, not because the brain is a neural network, but because connectionist networks exhibit operating characteristics which mimic operant behavior. Finally it is concluded that, since most of the work done so far in AI and Law is of the symbolic kind, it has as yet contributed little to our understanding of the legal mind.
It has recently been suggested that philosophy – in particular epistemology – has a contribution to make to the analysis of criminal and military intelligence. The present article pursues this suggestion, taking three phenomena that have recently been studied by philosophers, and showing that they have important implications for the gathering and sharing of intelligence, and for the use of intelligence in the determining of military strategy. The phenomena discussed are: (1) Simpson's Paradox, (2) the distinction between resiliency and reliability of data, and (3) the Causal Markov Condition.
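Simpson's Paradox, the first phenomenon listed, can be made concrete with a small numeric sketch. The figures below are the classic kidney-stone treatment numbers, used purely for illustration (they are not from the article itself): a treatment can win within every stratum of the data yet lose in the aggregate, which is exactly why naively pooled intelligence data can mislead.

```python
# Illustrative strata: (successes, total) per treatment arm.
# These are the standard textbook kidney-stone figures (assumed example).
strata = {
    "small": {"X": (81, 87),   "Y": (234, 270)},
    "large": {"X": (192, 263), "Y": (55, 80)},
}

def rate(successes, total):
    """Success proportion for one arm in one stratum."""
    return successes / total

def aggregate(arm):
    """Success proportion for an arm after pooling all strata."""
    succ = sum(strata[s][arm][0] for s in strata)
    tot = sum(strata[s][arm][1] for s in strata)
    return succ / tot

# Within every stratum, X outperforms Y...
assert all(rate(*strata[s]["X"]) > rate(*strata[s]["Y"]) for s in strata)

# ...yet pooled over strata, Y outperforms X: the reversal.
assert aggregate("Y") > aggregate("X")
```

The reversal arises because the stratum sizes are unevenly distributed across the two arms, so the pooled rates are weighted averages with very different weights.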
We human beings may not be the most admirable species on the planet, or the most likely to survive for another millennium, but we are without any doubt at all the most intelligent. We are also the only species with language. What is the relation between these two obvious facts?
Alan Turing devised his famous test (TT) through a slight modification of the parlor game in which a judge tries to ascertain the gender of two people who are only linguistically accessible. Stevan Harnad has introduced the Total TT (TTT), in which the judge can look at the contestants in an attempt to determine which is a robot and which a person. But what if we confront the judge with an animal, and a robot striving to pass for one, and then challenge him to peg which is which? Now we can index TTT to a particular animal and its synthetic correlate. We might therefore have TTTrat, TTTcat, TTTdog, and so on. These tests, as we explain herein, are a better barometer of artificial intelligence (AI) than Turing's original TT, because AI seems to have ammunition sufficient only to reach the level of artificial animal, not artificial person.
This study examines factors impacting ethical behavior of 103 hospital nurses. The level of emotional intelligence and ethical behavior of peers had a significant impact on ethical behavior of nurses. Independence climate had a significant impact on ethical behavior of nurses. Other ethical climate types such as professional, caring, rules, instrumental, and efficiency did not impact ethical behavior of respondents. Implications of this study for researchers and practitioners are discussed.
The currently developing fields of Ambient Intelligence and Persuasive Technology bring about a convergence of information technology and cognitive science. Smart environments that are able to respond intelligently to what we do and that even aim to influence our behaviour challenge the basic frameworks we commonly use for understanding the relations and role divisions between human beings and technological artifacts. After discussing the promises and threats of these technologies, this article develops alternative conceptions of agency, freedom, and responsibility that make it possible to better understand and assess the social roles of Ambient Intelligence and Persuasive Technology. The central claim of the article is that these new technologies urge us to blur the boundaries between humans and technologies also at the level of our conceptual and moral frameworks.
This target article considers the relation of fluid cognitive functioning to general intelligence. A neurobiological model differentiating working memory/executive function cognitive processes of the prefrontal cortex from aspects of psychometrically defined general intelligence is presented. Work examining the rise in mean intelligence-test performance between normative cohorts, the neuropsychology and neuroscience of cognitive function in typically and atypically developing human populations, and stress, brain development, and corticolimbic connectivity in human and nonhuman animal models is reviewed and found to provide evidence of mechanisms through which early experience affects the development of an aspect of cognition closely related to, but distinct from, general intelligence. Particular emphasis is placed on the role of emotion in fluid cognition and on research indicating fluid cognitive deficits associated with early hippocampal pathology and with dysregulation of the hypothalamic-pituitary-adrenal axis stress-response system. Findings are seen to be consistent with the idea of an independent fluid cognitive construct and to assist with the interpretation of findings from the study of early compensatory education for children facing psychosocial adversity and from behavior genetic research on intelligence. It is concluded that ongoing development of neurobiologically grounded measures of fluid cognitive skills appropriate for young children will play a key role in understanding early mental development and the adaptive success to which it is related, particularly for young children facing social and economic disadvantage. Specifically, in the evaluation of the efficacy of compensatory education efforts such as Head Start and the readiness for school of children from diverse backgrounds, it is important to distinguish fluid cognition from psychometrically defined general intelligence.
(Published Online April 5 2006) Key Words: cognition; cognition-emotion reciprocity; developmental disorders; emotion; fluid cognition; Flynn effect; general intelligence; limbic system; neuroscience; phenylketonuria; prefrontal cortex; psychometrics; schizophrenia.
The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. It took many decades for these ideas to spread from science fiction to popular science magazines and finally to attract the attention of serious philosophers. David Chalmers' (JCS 2010) article is the first comprehensive philosophical analysis of the singularity in a respected philosophy journal. The motivation of my article is to augment Chalmers' and to discuss some issues not addressed by him, in particular what it could mean for intelligence to explode. In the course of doing so, I will (have to) provide a more careful treatment of what intelligence actually is, separate speed explosion from intelligence explosion, compare what super-intelligent participants and classical human observers might experience and do, discuss immediate implications for the diversity and value of life, consider possible bounds on intelligence, and contemplate intelligences right at the singularity.
What characterizes most technical or theoretical accounts of memory is their reliance upon an internal storage model. Psychologists and neurophysiologists have suggested neural traces (either dynamic or static) as the mechanism for this storage, and designers of artificial intelligence have relied upon the same general model, instantiated magnetically or electronically instead of neurally, to do the same job. Both psychology and artificial intelligence design have heretofore relied, without much question, upon the idea that memory is to be understood as a matter of internal storage. In what follows, I shall first sketch the most important reasons for skepticism about this model, and I shall then propose an outline of an alternative way of talking about memory. This will provide an appropriate framework for suggesting a few implications for future work in artificial intelligence.
On a literal reading of 'Computing Machinery and Intelligence', Alan Turing presented not one, but two, practical tests to replace the question 'Can machines think?' He presented them as equivalent. I show here that the first test described in that much-discussed paper is in fact not equivalent to the second one, which has since become known as 'the Turing Test'. The two tests can yield different results; it is the first, neglected test that provides the more appropriate indication of intelligence. This is because the features of intelligence upon which it relies are resourcefulness and a critical attitude to one's habitual responses; thus the test's applicability is not restricted to any particular species, nor does it presume any particular capacities. This is more appropriate because the question under consideration is what would count as machine intelligence. The first test realizes a possibility that philosophers have overlooked: a test that uses a human's linguistic performance in setting an empirical test of intelligence, but does not make behavioral similarity to that performance the criterion of intelligence. Consequently, the first test is immune to many of the philosophical criticisms on the basis of which the (so-called) 'Turing Test' has been dismissed.
This paper analyzes ethical aspects of the new paradigm of Ambient Intelligence, which is a combination of Ubiquitous Computing and Intelligent User Interfaces (IUIs). After an introduction to the approach, two key ethical dimensions will be analyzed: freedom and privacy. It is argued that Ambient Intelligence, though often designed to enhance freedom and control, has the potential to limit freedom and autonomy as well. Ambient Intelligence also harbors great privacy risks, which are explored in turn.
This paper attempts some integration of two perspectives on questions about rationality and irrationality: the classical conception of irrationality as sophism and themes from the romantic revolt against Enlightenment reason. However, since talk of "reason" and "the irrational" often invites rigid dualities of reason and its opposites (such as feeling, intuition, faith, or tradition), the paper turns to "intelligence" in place of "reason," thinking of human intelligence as something less abstract, less purely theoretical, and more firmly rooted in practice, including communicative practice. "Intelligence" is "reason" naturalized.
A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: we take a number of well known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this equation formally captures the concept of machine intelligence in the broadest reasonable sense. We then show how this formal definition is related to the theory of universal optimal learning agents. Finally, we survey the many other tests and definitions of intelligence that have been proposed for machines.
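The general measure this abstract describes appears to be the Legg–Hutter universal intelligence measure; assuming that identification is correct, it is usually written as:

```latex
% Universal intelligence of an agent \pi (Legg--Hutter form, assumed):
%   E            = the set of computable reward-summable environments
%   K(\mu)       = the Kolmogorov complexity of environment \mu
%   V^{\pi}_{\mu} = the expected total reward of \pi interacting with \mu
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Informally: an agent is intelligent to the degree that it accumulates reward across all computable environments, with simpler environments (low K(μ)) weighted more heavily.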
This study investigates factors impacting perceptions of ethical conduct of peers among 293 students in four US universities. Self-reported ethical behavior and recognition of emotions in others (a dimension of emotional intelligence) impacted perception of ethical behavior of peers. None of the other dimensions of emotional intelligence were significant. Age, race, sex, GPA, and type of major (business versus nonbusiness) did not impact perception of ethical behavior of peers. Implications of the results of the study for business schools and industry professionals are discussed.
In the course of seeking an answer to the question "How do you know you are not a zombie?" Floridi (2005) issues an ingenious, philosophically rich challenge to artificial intelligence (AI) in the form of an extremely demanding version of the so-called knowledge game (or "wise-man puzzle," or "muddy-children puzzle")—one that purportedly ensures that those who pass it are self-conscious. In this article, on behalf of (at least the logic-based variety of) AI, I take up the challenge—which is to say, I try to show that this challenge can in fact be met by AI in the foreseeable future.
Profiling technologies are the facilitating force behind the vision of Ambient Intelligence in which everyday devices are connected and embedded with all kinds of smart characteristics enabling them to take decisions in order to serve our preferences without us being aware of it. These technological practices have considerable impact on the process by which our personhood takes shape and pose threats like discrimination and normalisation. The legal response to these developments should move away from a focus on entitlements to personal data, towards making transparent and controlling the profiling process by which knowledge is produced from these data. The tendency in intellectual property law to commodify information embedded in software and profiles could counteract this shift to transparency and control. These rights obstruct the access and contestation of the design of the code that impacts one’s personhood. This triggers a political discussion about the public nature of this code and forces us to rethink the relations between property, privacy and personhood in the digital age.
Gödel's Theorem is often used in arguments against machine intelligence, suggesting humans are not bound by the rules of any formal system. However, Gödelian arguments can be used to support AI, provided we extend our notion of computation to include devices incorporating random number generators. A complete description scheme can be given for integer functions, by which nonalgorithmic functions are shown to be partly random. Not being restricted to algorithms can be accounted for by the availability of an arbitrary random function. Humans, then, might not be rule-bound, but Gödelian arguments also suggest how the relevant sort of nonalgorithmicity may be trivially made available to machines.
Since the introduction of the imitation game by Turing in 1950 there has been much debate as to its validity in ascertaining machine intelligence. We wish herein to consider a different issue altogether: granted that a computing machine passes the Turing Test, thereby earning the label of "Turing Chatterbox", would it then be of any use (to us humans)? From the examination of scenarios, we conclude that when machines begin to participate in social transactions, unresolved issues of trust and responsibility may well overshadow any raw reasoning ability they possess.
From some perspectives, it seems obvious that emotions and feelings must be both reasonable and morally significant; from others, it may seem as obvious that they cannot be. This paper seeks to advance discussion of ethical implications of the currently contested issue of the relationship of reason to feeling and emotion via reflection upon various examples of affectively charged moral dilemma. This discussion also proceeds by way of critical consideration of recent empirical enquiry into these issues in the literature of so-called emotional intelligence. In this regard, despite ambiguities in their accounts of the relationship of reason to emotion, advocates of emotional intelligence generally incline to therapeutic conceptions of emotional health which are not inconsistent with currently fashionable cognitivist accounts of feeling and emotion. All the same, it is arguable that therapeutic or other strategies which overplay the possibility of cognitive or other resolution of emotional conflict are prey to certain difficulties. First, they underemphasise those passive but identity-constitutive aspects of affect which are not obviously rationally accountable. Secondly, they insufficiently recognise the extent to which emotional conflicts can be significantly implicated in moral diversity. In view of either or both of these points, they may fail to appreciate the moral inappropriateness of attempts to resolve certain forms of emotional conflict or tension.
This research investigates the efficacy of business ethics intervention, tests a theoretical model that the love of money is directly or indirectly related to propensity to engage in unethical behavior (PUB), and treats college major (business vs. psychology) and gender (male vs. female) as moderators in multi-group analyses. Results suggested that business students who received business ethics intervention significantly changed their conceptions of unethical behavior and reduced their propensity to engage in theft, while psychology students without intervention had no such changes. Therefore, ethics training had some impact on business students' learning and education (intelligence). For our theoretical model, results of the whole sample (N = 298) revealed that Machiavellianism (measured at Time 1) was a mediator of the relationship between the love of money (measured at Time 1) and unethical behavior (measured at Time 2) (the Love of Money → Machiavellianism → Unethical Behavior). Further, this mediating effect existed for business students (n = 198) but not for psychology students (n = 100), for male students (n = 165) but not for female students (n = 133), and for male business students (n = 128) but not for female business students (n = 70). Moreover, when examined alone, the direct effect (the Love of Money → Unethical Behavior) existed for business students but not for psychology students. We concluded that a short business ethics intervention may have no impact on the issue of virtue (wisdom).
The emotions have been one of the most fertile areas of study in psychology, neuroscience, and other cognitive disciplines. Yet as influential as the work in those fields is, it has not yet made its way to the desks of philosophers who study the nature of mind. Passionate Engines unites the two for the first time, providing both a survey of what emotions can tell us about the mind, and an argument for how work in the cognitive disciplines can help us develop new ways of understanding the mind as a whole. Craig DeLancey shows that our best philosophical and scientific understanding of the emotions provides essential insights on key issues in the philosophy of mind and artificial intelligence: intentionality, aesthetics, rationality, action theory, moral psychology, consciousness, ontology and autonomy. He provides an accessible overview of the science of emotion, explaining with minimal jargon the technical issues that arise. The book also offers new ways to understand the mind, suggesting that it is autonomy--and not cognition--that should be the core problem of the philosophy of mind, cognitive science, and artificial intelligence. DeLancey argues that the philosophy of mind has been held back by an impoverished view of naturalism, and that a proper appreciation of the complexity of the sciences of mind, readily demonstrated by the science of emotion, will overcome this. Passionate Engines provides a unique, contemporary view of the link between science and philosophy, offering a bold new way of looking at the mind for scholars in a range of disciplines. Its accessible and refreshing approach will appeal to philosophers, psychologists, computer scientists, others in the cognitive disciplines, and lay people interested in the mind.
The peculiar relationship between philosophy and Artificial Intelligence (AI) has been evident since the advent of AI. This paper aims to lay the foundations of an extended and well-founded philosophy of AI: it delineates a multi-layered general framework to which different contributions in the field may be traced back. The core point is to underline how, in the same scenario, both the role of philosophy in AI and the role of AI in philosophy must be considered. Moreover, this framework is revised and extended in light of a type of multiagent system devoted to addressing the issue of scientific discovery from both a conceptual and a practical point of view.
Intuitive conceptions guide practice, but practice reciprocally reshapes intuition. The intuitive conception of intelligence in AI was originally highly anthropocentric. However, the internal dynamics of AI research have resulted in a divergence from anthropocentric concerns. In particular, the increasing emphasis on commonsense knowledge and peripheral intelligence (perception and movement) in effect constitutes an incipient reorientation of intuitions about the nature of intelligence in a non-anthropocentric direction. I argue that this conceptual shift undermines Joseph Weizenbaum's claim that the project of artificial intelligence is inherently dehumanizing.
Harry Collins interprets Hubert Dreyfus’s philosophy of embodiment as a criticism of all possible forms of artificial intelligence. I argue that this characterization is inaccurate and predicated upon a misunderstanding of the relevance of phenomenology for empirical scientific research.
The Turing Test (TT) is criticised for various reasons, one being that it is limited to testing only human-like intelligence. We can read, for example, that "TT is testing humanity, not intelligence" (Fostel, 1993), that TT is "a test for human intelligence, not intelligence in general" (French, 1990), or that the perspective assumed by TT is parochial, arrogant and, generally, "massively anthropocentric" (Hayes and Ford, 1996). This limitation presumably causes a basic inadequacy of TT, namely that it misses a wide range of intelligence by focusing on one possibility only, namely on human intelligence. The spirit of TT enforces making explanations of possible machine intelligence in terms of what is known about intelligence in humans; thus the possible specificity of computer intelligence is ruled out from the outset.
The aims of this paper are threefold. First, to show that game-playing (GP), the discipline of Artificial Intelligence (AI) concerned with the development of automated game players, has a strong epistemological relevance within both AI and the vast area of the cognitive sciences. In this context games can be seen as a way of securely reducing (segmenting) real-world complexity, thus creating the laboratory environment necessary for testing the diverse types and facets of intelligence produced by computer models. This paper aims to promote the belief that games represent an excellent tool for the project of computational psychology (CP). Second, to underline how, despite this, GP has mainly adopted an engineering-inspired methodology and in doing so has distorted the framework of cognitive functionalism. Many successes (e.g., chess, checkers) have been achieved while refusing human-like reasoning. AI has appeared to work well despite ignoring an intrinsic motivation, that of creating an explanatory link between machines and mind. Third, to assert that substantial improvements in GP may be obtained in the future only by renewed interest in human-inspired models of reasoning and in other cognitive studies. In fact, if we increase the complexity of games (from NP-Completeness to AI-Completeness) in order to reproduce real-life problems, computer science techniques enter an impasse. Many of AI’s recent GP experiences can be shown to validate this. The lack of consistent philosophical foundations for cognitive AI and the minimal philosophical commitment of AI investigation are two of the major reasons that play an important role in explaining why CP has been overlooked.
Ambient Intelligence provides the potential for vast and varied applications, bringing with it both promise and peril. The development of Ambient Intelligence applications poses a number of ethical and legal concerns. Mobile devices are increasingly evolving into tools to orientate in and interact with the environment, thus introducing a user-centric approach to Ambient Intelligence. The MINAmI (Micro-Nano integrated platform for transverse Ambient Intelligence applications) FP6 research project aims at creating core technologies for mobile device based Ambient Intelligence services. In this paper we assess five scenarios that demonstrate forthcoming MINAmI-based applications focusing on healthcare, assistive technology, homecare, and everyday life in general. A legal and ethical analysis of the scenarios is conducted, which reveals various conflicting interests. The paper concludes with some thoughts on drafting ethical guidelines for Ambient Intelligence applications.
This article examines argument structures and strategies in pro and con argumentation about the possibility of human-level artificial intelligence (AI) in the near term future. It examines renewed controversy about strong AI that originated in a prominent 1999 book and continued at major conferences and in periodicals, media commentary, and Web-based discussions through 2002. It will be argued that the book made use of implicit, anticipatory refutation to reverse prevailing value hierarchies related to AI. Drawing on Perelman and Olbrechts-Tyteca's (1969) study of refutational argument, this study considers points of contact between opposing arguments that emerged in opposing loci, dissociations, and casuistic reasoning. In particular, it shows how perceptions of AI were reframed and rehabilitated through metaphorical language, reversal of the philosophical pair artificial/natural, appeals to the paradigm case, and use of the loci of quantity and essence. Furthermore, examining responses to the book in subsequent arguments indicates the topoi characteristic of the rhetoric of technology advocacy.
This study examines the influence of ethics instruction, religiosity, and intelligence on cheating behavior. A sample of 230 upper level, undergraduate business students had the opportunity to increase their chances of winning money in an experimental situation by falsely reporting their task performance. In general, the results indicate that students who attended worship services more frequently were less likely to cheat than those who attended worship services less frequently, but that students who had taken a course in business ethics were no less likely to cheat than students who had not taken such a course. However, the results do indicate that the extent to which taking a business ethics course influenced cheating behavior was moderated by the religiosity and intelligence of the individual student. In particular, while students who were highly religious were unlikely to cheat whether or not they had taken a business ethics course, students who were not highly religious demonstrated less cheating if they had taken a business ethics course. In addition, the extent of cheating among highly intelligent students was significantly reduced if such students had taken a course in business ethics. Likewise, individuals who were highly intelligent displayed significantly less cheating if they were also highly religious. The implications of these findings are discussed.
Artificial Intelligence has become big business in the military and in many industries. In spite of this growth there still remains no consensus about what AI really is. The major factor which seems to be responsible for this is the lack of agreement about the relationship between behavior and intelligence. In part, ethical concerns generated by questions of who, what, and how intelligence is determined may be sustaining this lack of agreement.
Recent work in artificial intelligence has increasingly turned to argumentation as a rich, interdisciplinary area of research that can provide new methods related to evidence and reasoning in the area of law. Douglas Walton provides an introduction to basic concepts, tools and methods in argumentation theory and artificial intelligence as applied to the analysis and evaluation of witness testimony. He shows how witness testimony is by its nature inherently fallible and sometimes subject to disastrous failures. At the same time, such testimony can provide evidence that is not only necessary but inherently reasonable for logically guiding legal experts to accept or reject a claim. Walton shows how to overcome the traditional disdain for witness testimony as a type of evidence shown by logical positivists, and the views of trial sceptics who doubt that trial rules deal with witness testimony in a way that yields a rational decision-making process.
Numerous and diverse reports indicate the efficacy of shamanic plant adjuncts (e.g., iboga, ayahuasca, psilocybin) for the care and treatment of addiction, post-traumatic stress disorder, cancer, cluster headaches, and depression. This article reports on a first-person healing of lifelong asthma and atopic dermatitis in the shamanic context of the contemporary Peruvian Amazon and the sometimes digital ontology of online communities. The article suggests that emerging language, concepts, and data drawn from the sciences of plant signaling and behavior regarding “plant intelligence” provide a useful heuristic framework for comprehending and actualizing the healing potentials of visionary plant “entheogens” (Wasson 1971) as represented both through first-person experience and online reports. Together with the paradigms and practices of plant signaling, biosemiotics provides a robust and coherent map for contextualizing the often reported experience of plant communication with ayahuasca and other entheogenic plants. The archetype of the “plant teachers” (called Doctores in the upper Amazon) is explored as a means for organizing and interacting with this data within an epistemology of the “hallucination/perception continuum” (Fischer 1975). “Ecodelic” is offered as a new linguistic interface alongside “entheogen” (Wasson 1971).
This article argues that existing systems on the Web cannot approach human-level intelligence, as envisioned by Descartes, without being able to achieve genuine problem solving on unseen problems. The article argues that this entails committing to a strong intensional logic. In addition to revising extant arguments in favor of intensional systems, it presents a novel mathematical argument to show why extensional systems can never hope to capture the inherent complexity of natural language. The argument makes its case by focusing on representing, with increasing degrees of complexity, knowledge in a first-order language. Nevertheless, the attempts at representation fail to achieve consistency, making the case for an intensional representation system for natural language clear.
Although activity aimed at the construction of artificial intelligence started about 60 years ago, contemporary intelligent systems are effective in very narrow domains only. One of the reasons for this situation appears to be serious problems in the theory of intelligence. Intelligence is a characteristic of goal-directed systems, and two classes of goal-directed systems can be derived from observations on animals and humans: one class is systems with innately and jointly determined goals and means; the other class contains systems that are able to construct arbitrary goals and means. It is suggested that these classes (which implicitly underlie most models of artificial intelligence) are insufficient to explain human goal-directed activity. A broader approach to goal-directed systems is considered. This approach suggests that humans are goal-directed systems that jointly synthesize arbitrary goals and means. Neural and psychological data favoring this hypothesis and its experimental validation are considered. A simple computer model based on the idea of joint synthesis to simulate goal-directed activity is presented. The usage of the idea of joint synthesis for the construction of artificial intelligence is discussed.
One of the central factors influencing the process and the outcome of technology transfer is the nature of the technology being transferred. This paper identifies and discusses the main characteristics of Artificial Intelligence (AI) technology from the point of view of international technology transfer. It attempts to indicate the peculiarities of AI in this context and move towards a framework to assist recipient decision makers in optimising the formulation of their policies on AI technology transfer.
This paper describes a study of the effects of two acts of social intelligence, namely mimicry and social praise, when used by an artificial social agent. An experiment (N = 50) is described which shows that social praise—positive feedback about the ongoing conversation—increases the perceived friendliness of a chat-robot. Mimicry—displaying matching behavior—enhances the perceived intelligence of the robot. We advise designers to incorporate both mimicry and social praise when their system needs to function as a social actor. Different ways of implementing mimicry and praise by artificial social actors in an ambient persuasive scenario are discussed.
During the 1950s, there was a burst of enthusiasm about whether artificial intelligence might surpass human intelligence. Since then, technology has changed society so dramatically that the focus of study has shifted toward society's ability to adapt to technological change. Technology and rapid communications weaken the capacity of society to integrate into the broader social structure those people who have had little or no access to education. (Most of the recent use of communications by the excluded has been disruptive, not integrative.) The interweaving of socioeconomic activity and large-scale systems has had a dehumanizing effect on people excluded from social participation by these trends. Jobs vanish at an accelerating rate. Marketing creates demand for goods that stress the global environment, even while the global environment no longer yields readily accessible resources. Mining and petroleum firms push into ever more challenging environments (e.g., deep mines and seabed mining) to meet resource demands. These activities are expensive, and resource prices rise rapidly, further excluding groups that cannot pay for these resources. The impact of large-scale systems on society leads to mass idleness, with the accompanying threat of violent reaction as unemployed masses seek to blame both people in power and the broader social structure for their plight. Perhaps the impact of large-scale systems on society has already eroded essential qualities of humanness. Humans, when they feel "socially useless," are dehumanized. (At the same time, machines, at any scale, seem incapable of emotion or empathy.) Has the cost of technological progress been too high to pay? These issues are addressed in this paper.
In recent years there has been a substantial amount of research on emotional intelligence (EI) across a wide range of disciplines. The term has also been receiving increasing attention in the popular business press. This article extends previous research by seeking to determine whether there is a relationship between emotional intelligence and ethical judgment among practicing managers with respect to questions of an ethical nature that can arise in their professional activity. It analyzes the results of a survey of 324 managers enrolled in executive MBA programs at five universities in the southeastern and northeastern United States. This study is based on a model presented by Forsyth showing two dimensions that play an important role in ethical evaluation and behavior. Respondents were classified into one of four groups according to their idealism and relativism levels—situationists, subjectivists, absolutists, and exceptionists. The four ideological groups' scores were compared. The results indicate significant differences between the situationists and absolutists on the one hand, and the subjectivists and exceptionists on the other. The former's emotional intelligence scores were significantly higher, demonstrating a strong relationship between emotional intelligence and ethical ideology. The results raise important implications for practitioners and educators.
This paper introduces a concept called task muddiness as a metric for higher intelligence. Task muddiness is meant to be inclusive and expandable in nature. The intelligence required to execute a task is measured by the composite muddiness of the task, described by multiple muddiness factors. The composite muddiness explains why many challenging tasks are muddy and why autonomous mental development is necessary for muddy tasks. It facilitates better understanding of intelligence, of what the human adult mind can do, and of how to build a machine that acquires higher intelligence. Task muddiness indicates a major reason why a higher biological mind develops autonomously from autonomous, simple-to-complex experience. The paper also discusses some key concepts that are necessary for understanding the mind and intelligence, such as intelligence metrics, the mode in which a task is conveyed to the task executor, a human and a machine being a joint task performer in traditional artificial intelligence (AI), a developmental agent (human or machine) being a sole task performer, and the need for autonomy in task-nonexplicit learning.
In this contribution we will explore some of the implications of the vision of Ambient Intelligence (AmI) for law and legal philosophy. AmI creates an environment that monitors and anticipates human behaviour with the aim of customised adaptation of the environment to a person's inferred preferences. Such an environment depends on distributed human and non-human intelligence that raises a host of unsettling questions around causality, subjectivity, agency and (criminal) liability. After discussing the vision of AmI we will present relevant research in the field of philosophy of technology, inspired by the post-phenomenological position taken by Don Ihde and the constructivist realism of Bruno Latour. We will posit the need to conceptualise technological normativity in comparison with legal normativity, claiming that this is necessary to develop democratic accountability for the implications of emerging technologies like AmI. Lastly, we will investigate to what extent technological devices and infrastructures can and should be used to achieve compliance with the criminal law, and we will discuss some of the implications of non-human distributed intelligence for criminal liability.
Skilled cooperative action means being able to understand the communicative situation and know how and when to respond appropriately for the purpose at hand. This skill consists in the performance of knowledge in co-action and is a form of social intelligence for sustainable interaction. Social intelligence, here, denotes the ability of actors and agents to manage their relationships with each other. Within an environment we have people, tools, artefacts and technologies that we engage with. Let us consider all of these as dynamic representations of knowledge. When this knowledge becomes enacted, i.e., when we understand how to use it to communicate effectively, such that it becomes invisible to us, it becomes knowledge in co-action. A challenge of social intelligence design is to create mediating interfaces that can become invisible to us, i.e., that function as an extension of ourselves. In this paper, we present a study of the way people use surfaces that afford graphical interaction in collaborative design tasks, in order to inform the design of intelligent user interfaces. This is a descriptive study rather than a usability study, exploring how size, orientation, and horizontal and vertical positioning influence the functionality of the surface in a collaborative setting.
This paper presents an analysis of three major contests for machine intelligence. We conclude that a new era for Turing's test requires a fillip in the guise of a committed sponsor, not unlike DARPA, funder of the successful 2007 Urban Challenge.