The vision of the new generation of office systems is based on the hypothesis that an automatic support system is the more useful and acceptable the more its behaviour and performance accord with features of human behaviour. Consequently, recent development activities are influenced by the paradigm of the computer as man's “cooperative assistant”. The metaphors of assistance and cooperation illustrate some major requirements to be met by new office systems. Cooperative office systems will raise a set of new questions about the future of human work, human-machine interaction, the forms of individual control of work, the scope of action and the development of competence in the frame of AI-supported cooperative work, the relative benefits of different types of organizations, etc. With the increasing autonomy of the computer in task accomplishment, research should also be concerned with the question of the limits of such a development. In AI research and development there is much discussion of the intended performance of this new technology. Perhaps this will provide insights into how these new machines should support numerous aspects of individual or cooperative work. But we find fewer ideas about the future of human work. What will the role of the human actor be in future AI-supported cooperative work? What kind of work do we want to support with AI machines? In the following article I will try to identify some questions for further sociological research. I will base my considerations on some theoretical aspects of understanding and meaning. Regarding identical and non-identical aspects in the communication behaviour of humans and machines, I will focus on some questions to be investigated. Finally, some methodological problems of sociological research in the field of Artificial Intelligence will be discussed, especially the so-called “Time Dilemma” of sociological research on technology.
Skilled cooperative action means being able to understand the communicative situation and to know how and when to respond appropriately for the purpose at hand. This skill consists in the performance of knowledge in co-action and is a form of social intelligence for sustainable interaction. Social intelligence, here, denotes the ability of actors and agents to manage their relationships with each other. Within an environment we have people, tools, artefacts and technologies that we engage with. Let us consider all of these as dynamic representations of knowledge. When this knowledge becomes enacted, i.e., when we understand how to use it to communicate effectively, such that it becomes invisible to us, it becomes knowledge in co-action. A challenge of social intelligence design is to create mediating interfaces that can become invisible to us, i.e., an extension of ourselves. In this paper, we present a study of the way people use surfaces that afford graphical interaction in collaborative design tasks, in order to inform the design of intelligent user interfaces. This is a descriptive study rather than a usability study, exploring how size, orientation, and horizontal and vertical positioning influence the functionality of the surface in a collaborative setting.
Dubreuil (Biol Phil 25:53–73, 2010b, this journal) argues that modern-like cognitive abilities for inhibitory control and goal maintenance most likely evolved in Homo heidelbergensis, well before the evolution of oft-cited modern traits such as symbolism and art. Dubreuil's argument proceeds in two steps. First, he identifies two behavioral traits that are supposed to be indicative of the presence of a capacity for inhibition and goal maintenance: cooperative feeding and cooperative breeding. Next, he tries to show that these behavioral traits most likely emerged in Homo heidelbergensis. In this paper, I show that neither of these steps is warranted in light of current scientific evidence, and thus that the evolutionary background of human executive functions, such as inhibition and goal maintenance, remains obscure. Nonetheless, I suggest that cooperative breeding might mark a crucial step in the evolution of our species: its early emergence in Homo erectus might have favored a social intelligence that was required to get modernity really off the ground in Homo sapiens.
In current philosophical research the term 'philosophy of social action' can be used - and has been used - in a broad sense to encompass the following central research topics: 1) action occurring in a social context; this includes multi-agent action; 2) joint attitudes (or "we-attitudes", such as joint intention and mutual belief) and other social attitudes needed for the explication and explanation of social action; 3) social macro-notions, such as actions performed by social groups and properties of social groups, such as their goals and beliefs; 4) social norms and social institutions (see Tuomela, 1984, 1995). The theory of social action, understood analogously in a broad sense, would then involve not only philosophical but all other relevant theorizing about social action. Thus, in this sense, such fields of Artificial Intelligence (AI) as Distributed AI (DAI) and the theory of Multi-Agent Systems (MAS) fall within the scope of the theory of social action. DAI studies the social side of computer systems and includes various well-known areas ranging from Human-Computer Interaction, Computer-Supported Cooperative Work, Organizational Processing, and Distributed Problem Solving to the Simulation of Social Systems and Organizations. Even if I am a philosopher with low artificial intelligence, I will below try to say something about what the scope of DAI should be taken to be on conceptual and philosophical grounds. (In the later sections of the paper the central notion of joint intention will be the main topic - in order to illustrate how philosophers and DAI researchers approach this issue.) Let us now consider the relationship between philosophy - especially the philosophy of social action - and DAI. Both are concerned with social matters and in this sense seem to have a connection to social science proper. What kinds of questions should these areas of study be concerned with?
In principle, ordinary social science should study all aspects of social life (in various societies and cultures), try to describe it and create general theories to explain it.
James Campbell's Understanding John Dewey represents the latest in his series of recent books focused on the classical pragmatist tradition. In The Community Reconstructs, Campbell capably explored the meaning and relevance of pragmatic social thought, urging that the social pragmatists combined 'the inquiring and critical spirit of Peirce' with 'issues of general and direct human concern that interested James'. Dewey is 'the most important figure of this movement' and the 'primary figure' for the earlier book. Campbell now engages Dewey more fully.
Conviviality has been identified as a key concept necessary to web communities, such as digital cities, and while it has been simultaneously defined in the literature as individual freedom realized in personal interdependence, as rational and cooperative behavior, and as a normative instrument, no model of conviviality has yet been proposed for computer science. In this article, we raised the question of whether social intelligence design could be used to design convivial digital cities. We first looked at digital cities and identified, from a social intelligence design point of view, two main categories of digital cities: public websites and commercial websites; we also noted the experimental qualities of digital cities. Second, we analyzed the concept of conviviality in social science, multi-agent systems and intelligent interfaces; we showed the distinctions among various kinds of use of conviviality, the positive outcomes such as social cohesion, trust and participation, but also the negative aspects that emerge when conviviality becomes an instrument of power relations. Third, we looked at the normative aspect of conviviality as described in the literature and found that social norms for conviviality parallel legal and institutional norms for digital cities. Finally, as a first step toward obtaining measures of conviviality, we presented a case study describing agent and user interactions using dependence graphs. We also presented an analysis of conviviality requirements and described our plan and methodology for designing convivial digital cities.
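The dependence graphs mentioned above can be given a minimal computational reading. The sketch below is an illustrative assumption, not the authors' actual model: dependence relations between agents and users are directed edges, and reciprocal dependence is counted as one conceivable proxy for conviviality understood as interdependence.

```python
# Hypothetical sketch: actors' dependence relations as directed edges,
# with reciprocal (mutual) dependence counted as a rough conviviality
# proxy. Actor names and the measure itself are illustrative assumptions.

def mutual_dependencies(edges):
    """Count unordered pairs of actors that depend on each other."""
    pairs = set()
    for a, b in edges:
        if a != b and (b, a) in edges:
            pairs.add(frozenset((a, b)))
    return len(pairs)

# Example: user U depends on agents A and B; A and B depend on each
# other; B also depends on U.
graph = {("U", "A"), ("U", "B"), ("A", "B"), ("B", "A"), ("B", "U")}
print(mutual_dependencies(graph))  # 2 reciprocal pairs: {A, B} and {U, B}
```

The choice of counting unordered pairs (via `frozenset`) avoids double-counting the two directions of a mutual dependence.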
The paper explores the creative thinking process and throws light on creativity enhancement. From the perspective of possible creativity enhancement both the characteristics of creativity and the creative thinking process are discussed, together with an analysis of the process and its common factors. Constraints on innovation (as a special type of creativity), innovation management and the acceptance of change are discussed; creativity between cooperating individuals is also examined. Some possible computer-based tools to enhance creativity, including innovation, are discussed. A framework of facilities to help promote and support the use of creativity enhancement techniques and tools (SOLI) is proposed. The paper is the result of a study undertaken by the author into creativity, innovation and cooperation.
There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
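The "median estimate" reported above is the standard way to aggregate such forecasts. The sketch below illustrates the aggregation step only; the year estimates in it are invented for illustration and are not the survey's data.

```python
# Illustrative only: aggregating expert forecasts by taking the median.
# The estimates below are made-up numbers, not the questionnaire results.
import statistics

# Hypothetical per-expert estimates of the year with a 50% chance of
# high-level machine intelligence.
estimates_50pct = [2035, 2040, 2045, 2045, 2050, 2060, 2090]

median_year = statistics.median(estimates_50pct)
print(median_year)  # 2045
```

The median is preferred over the mean here because a few extreme outliers (e.g. "never", coded as a very late year) would otherwise dominate the aggregate.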
This is the editorial for a special volume of JETAI, featuring papers by Omohundro, Armstrong/Sotala/O’Heigeartaigh, T. Goertzel, Brundage, Yampolskiy, B. Goertzel, Potapov/Rodinov, Kornai and Sandberg. If the general intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity – so even if we estimate the probability of this event to be fairly low, it is necessary to think about it now. We need to estimate what progress we can expect, what the impact of superintelligent machines might be, how we might design safe and controllable systems, and whether there are directions of research that should best be avoided or strengthened.
The first decade of this century has seen the nascency of the first mathematical theory of general artificial intelligence. This theory of Universal Artificial Intelligence (UAI) has made significant contributions to many theoretical, philosophical, and practical AI questions. In a series of papers culminating in a book (Hutter, 2005), an exciting, sound, and complete mathematical model for a super-intelligent agent (AIXI) has been developed and rigorously analyzed. While nowadays most AI researchers avoid discussing intelligence, the award-winning PhD thesis (Legg, 2008) provided the philosophical embedding and investigated the UAI-based universal measure of rational intelligence, which is formal, objective and non-anthropocentric. Recently, effective approximations of AIXI have been derived and experimentally investigated in a JAIR paper (Veness et al. 2011). This practical breakthrough has resulted in some impressive applications, finally muting the earlier critique that UAI is only a theory. For the first time, without providing any domain knowledge, the same agent is able to self-adapt to a diverse range of interactive environments. For instance, AIXI is able to learn from scratch to play TicTacToe, Pacman, Kuhn Poker, and other games by trial and error, without even being provided the rules of the games. These achievements give new hope that the grand goal of Artificial General Intelligence is not elusive. This article provides an informal overview of UAI in context. It attempts to gently introduce a very theoretical, formal, and mathematical subject, and discusses philosophical and technical ingredients, traits of intelligence, some social questions, and the past and future of UAI.
Is there a field of social intelligence? Many different disciplines approach the subject, and it may seem only natural to suppose that different fields of study aim at explaining different phenomena; in other words, that there is no special field of study of social intelligence. In this paper, I argue for the opposite claim. Namely, there is a way to integrate research on social intelligence, as long as one accepts the mechanistic account of explanation. Mechanistic integration of different explanations, however, comes at a cost: mechanism requires explanatory models to be fairly complete and realistic, and this does not seem to be the case for many models concerning social intelligence, especially models of economic behavior. Such models must either be made more realistic, or they will not count as contributing to the same field. I stress that the focus on integration does not lead to ruthless reductionism; on the contrary, mechanistic explanations are best understood as explanatorily pluralistic.
Lockheed Martin Corp. has funded research to generate a framework and methodology for developing semantic reasoning applications to support the discipline of Intelligence Analysis. This chapter outlines that framework, discusses how it may be used to advance the information sharing and integrated analytic needs of the Intelligence Community, and suggests a system/software architecture for such applications.
We describe on-going work on IAO-Intel, an information artifact ontology developed as part of a suite of ontologies designed to support the needs of the US Army intelligence community within the framework of the Distributed Common Ground System (DCGS-A). IAO-Intel provides a controlled, structured vocabulary for the consistent formulation of metadata about documents, images, emails and other carriers of information. It will provide a resource for uniform explication of the terms used in multiple existing military dictionaries, thesauri and metadata registries, thereby enhancing the degree to which the content formulated with their aid will be available to computational reasoning.
In this article the question is raised whether artificial intelligence has any psychological relevance, i.e. whether it contributes to our knowledge of how the mind/brain works. It is argued that the psychological relevance of artificial intelligence of the symbolic kind is questionable as yet, since there is no indication that the brain structurally resembles or operates like a digital computer. However, artificial intelligence of the connectionist kind may have psychological relevance, not because the brain is a neural network, but because connectionist networks exhibit operating characteristics which mimic operant behavior. Finally, it is concluded that, since most of the work done so far in AI and Law is of the symbolic kind, it has as yet contributed little to our understanding of the legal mind.
It has recently been suggested that philosophy – in particular epistemology – has a contribution to make to the analysis of criminal and military intelligence. The present article pursues this suggestion, taking three phenomena that have recently been studied by philosophers, and showing that they have important implications for the gathering and sharing of intelligence, and for the use of intelligence in the determining of military strategy. The phenomena discussed are: (1) Simpson's Paradox, (2) the distinction between resiliency and reliability of data, and (3) the Causal Markov Condition.
In this study, we develop a theoretical model of monetary intelligence (MI), explore the extent to which individuals' meaning of money is related to the pursuit of materialistic purposes, and test our model using the whole sample and across college major and gender. We select the 15-item love of money (LOM) construct—Factors Good, Evil (Affective), Budget (Behavioral), Achievement, and Power (Cognitive)—from the Money Ethic Scale, and Factors Success and Centrality—two indicators—from the Materialism Scale. Based on our data collected from 330 university students in the Czech Republic, we provide the following findings. First, our formative models are superior to our reflective models. Second, for the reflective model, money represents Power, Good, Achievement, and not Evil, in the context of materialism. Our formative model suggests that those who pursue materialism cherish Achievement (vanity) but Budget their money poorly. Third, multi-group analyses illustrate that humanities students (62.4 % female) consider money as Evil and Budget their money poorly, while those in natural sciences (37.6 % female) do not. Further, men are obsessed with Achievement, whereas women do not Budget their money properly, suggesting reflective temptation for males and impulsive temptation for females. Our novel discoveries shed new light on the relationships between LOM and materialism and offer practical implications for the field of consumer behavior and business ethics.
In Study 1, we test a theoretical model involving temptation, monetary intelligence (MI), a mediator, and unethical intentions, and investigate the direct and indirect paths simultaneously based on multiple-wave panel data collected in open classrooms from 492 American and 256 Chinese students. For the whole sample, temptation is related to low unethical intentions indirectly. Multi-group analyses reveal that temptation predicts unethical intentions both indirectly and directly for male American students only, but not for female American students. For Chinese students, both paths are non-significant. Love of money contributes significantly to MI for all students. In Study 2, using money as a temptation and giving students opportunities to cheat on a matrix task, most Chinese students (78.4 %) do not cheat in open classrooms, supporting the survey and structural equation modeling (SEM) results of Study 1. However, students in private cubicles cheat significantly more (53.4 %) than those in open classrooms (21.6 %). Finally, students' love of money attitude predicts cheating. Factor Rich predicts the cheating amount, whereas Factor Motivator predicts the cheating percentage. Our results shed new light on the impact of temptation and love of money as dispositional traits, money as a temptation, and environmental context (public vs. private) on unethical intentions and cheating behaviors.
We develop a theoretical model, explore the relationship between temptation (both reflective and formative) and unethical intentions by treating monetary intelligence (MI) as a mediator, and examine the direct (temptation to unethical intentions) and indirect (temptation to MI to unethical intentions) paths simultaneously based on multiple-wave panel data collected from 340 part-time employees and university (business) students. The positive indirect path suggested that yielding to temptation (e.g., high cognitive impairment and lack of self-control) led to poor MI (low stewardship behavior, but high cognitive meaning) that, in turn, led to high unethical intentions (theft, corruption, and deception). Our counterintuitive negative direct path revealed that those who controlled their temptation had high unethical intentions. Due to the multiple faces of temptation (the suppression effect), maliciously controlled temptation (low cognitive impairment and high self-control) led to deviant intentions. Subsequent multi-group analysis across gender (a moderator) reformulated the mystery of temptation: a negative direct path for males, but a positive indirect path for females. For males, the negative direct path generated a dark impact on unethical intentions; for females, the positive indirect path did not, but offered great implications for consumer behavior. Both falling “and” not falling into temptation led to unethical intentions, which varied across gender. Our counterintuitive, novel, and original theoretical, empirical, and practical contributions may spark curiosity and add new vocabulary to the conversation regarding temptation, money attitudes, consumer psychology, and business ethics.
This research investigates the efficacy of business ethics intervention, tests a theoretical model in which the love of money is directly or indirectly related to propensity to engage in unethical behavior (PUB), and treats college major (business vs. psychology) and gender (male vs. female) as moderators in multi-group analyses. Results suggested that business students who received business ethics intervention significantly changed their conceptions of unethical behavior and reduced their propensity to engage in theft, while psychology students without intervention showed no such changes. Therefore, ethics training had some impact on business students' learning and education (intelligence). For our theoretical model, results of the whole sample (N = 298) revealed that Machiavellianism (measured at Time 1) was a mediator of the relationship between the love of money (measured at Time 1) and unethical behavior (measured at Time 2) (the Love of Money → Machiavellianism → Unethical Behavior). Further, this mediating effect existed for business students (n = 198) but not for psychology students (n = 100), for male students (n = 165) but not for female students (n = 133), and for male business students (n = 128) but not for female business students (n = 70). Moreover, when examined alone, the direct effect (the Love of Money → Unethical Behavior) existed for business students but not for psychology students. We concluded that a short business ethics intervention may have no impact on the issue of virtue (wisdom).
This study examines factors impacting ethical behavior of 103 hospital nurses. The level of emotional intelligence and ethical behavior of peers had a significant impact on ethical behavior of nurses. Independence climate had a significant impact on ethical behavior of nurses. Other ethical climate types such as professional, caring, rules, instrumental, and efficiency did not impact ethical behavior of respondents. Implications of this study for researchers and practitioners are discussed.
THE CASE FOR GOVERNMENT BY ARTIFICIAL INTELLIGENCE. Tired of election madness? The rhetoric of politicians? Their unreliable promises? And less than good government? Until recently, it hasn’t been hard for people to give up control to computers. Not very many people miss the effort and time required to do calculations by hand, to keep track of their finances, or to complete their tax returns manually. But relinquishing direct human control to self-driving cars is expected to be more of a challenge, despite the predicted decrease in vehicle accidents thanks to artificial intelligence that isn’t subject to human distractions and errors of judgment. If turning vehicle control over to artificial intelligence is a challenge, it is a very mild one compared with the idea that we might one day recognize and want to implement the advantages of human government by AI. But, like autonomous vehicle control, government by AI is likely to offer decided benefits. In other publications, the author has studied a variety of widespread human limitations that, throughout human history, have led to much human suffering as well as ecological destruction. For the first time, these psychological and cognitive human shortcomings are taken into account in an essay that makes the case for government by artificial intelligence.
Recent work in artificial intelligence has increasingly turned to argumentation as a rich, interdisciplinary area of research that can provide new methods related to evidence and reasoning in the area of law. Douglas Walton provides an introduction to basic concepts, tools and methods in argumentation theory and artificial intelligence as applied to the analysis and evaluation of witness testimony. He shows how witness testimony is by its nature inherently fallible and sometimes subject to disastrous failures. At the same time such testimony can provide evidence that is not only necessary but inherently reasonable for logically guiding legal experts to accept or reject a claim. Walton shows how to overcome the traditional disdain for witness testimony as a type of evidence shown by logical positivists, and the views of trial sceptics who doubt that trial rules deal with witness testimony in a way that yields a rational decision-making process.
The monograph’s twofold purpose is to recognize epistemological intelligence as a distinguishable variety of human intelligence, one that is especially important to philosophers, and to understand the challenges posed by the psychological profile of philosophers that can impede the development and cultivation of the skills associated with epistemological intelligence.
Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence and robotics communities. We will argue that attempts to attribute moral agency and assign rights to all intelligent machines are misguided, whether applied to infrahuman or superhuman AIs, as are proposals to limit the negative effects of AIs by constraining their behavior. As an alternative, we propose a new science of safety engineering for intelligent artificial agents based on maximizing for what humans value. In particular, we challenge the scientific community to develop intelligent systems that have human-friendly values that they provably retain, even under recursive self-improvement.
This interdisciplinary collection of classical and contemporary readings provides a clear and comprehensive guide to the many hotly debated philosophical issues at the heart of artificial intelligence.
This paper analyzes ethical aspects of the new paradigm of Ambient Intelligence, which is a combination of Ubiquitous Computing and Intelligent User Interfaces (IUIs). After an introduction to the approach, two key ethical dimensions will be analyzed: freedom and privacy. It is argued that Ambient Intelligence, though often designed to enhance freedom and control, has the potential to limit freedom and autonomy as well. Ambient Intelligence also harbors great privacy risks, which are likewise explored.
Effective ethics teaching and training must cultivate both the critical thinking skills and the character traits needed to deliberate effectively about ethical issues in personal and professional life. After highlighting some cognitive and motivational obstacles that stand in the way of this task, the article draws on educational research and the author's experience to demonstrate how cooperative learning techniques can be used to overcome them.
In recent years there has been a substantial amount of research on emotional intelligence (EI) across a wide range of disciplines, and the term has been receiving increasing attention in the popular business press. This article extends previous research by seeking to determine whether there is a relationship between emotional intelligence and ethical judgment among practicing managers with respect to questions of an ethical nature that can arise in their professional activity. It analyzes the results of a survey of 324 managers enrolled in executive MBA programs at five universities in the southeastern and northeastern United States. This study is based on a model presented by Forsyth showing two dimensions that play an important role in ethical evaluation and behavior. Respondents were classified into one of four groups according to their idealism and relativism levels—situationists, subjectivists, absolutists, and exceptionists. The four ideological groups' scores were compared. The results indicate significant differences between the situationists and absolutists on the one hand, and the subjectivists and exceptionists on the other. The former's emotional intelligence scores were significantly higher, thus demonstrating a strong relationship between emotional intelligence and ethical ideology. The results raise important implications for practitioners and educators.
This paper presents a human-computer interaction model with a three-layer learning mechanism in a pervasive environment. We begin with a discussion of a number of important issues related to human-computer interaction, followed by a description of the architecture of a multi-agent cooperative design system for a pervasive computing environment. We present our proposed three-layer HCI model and introduce the group formation algorithm, which is predicated on a dynamic sharing niche technology. Finally, we explore the cooperative reinforcement learning and fusion algorithms; the paper closes with concluding observations and a summary of the principal work and contributions of this paper.
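One common shape that "cooperative reinforcement learning and fusion" takes can be sketched as follows. This is a generic textbook-style illustration under stated assumptions (independent tabular Q-learning plus averaging as the fusion rule); the paper's own algorithms are not specified in the abstract.

```python
# Generic sketch: agents run independent tabular Q-learning and
# periodically fuse their Q-tables by averaging per state-action pair.
# The update and fusion rules are illustrative assumptions.

def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """One tabular Q-learning update for a single agent."""
    best_next = max(Q[s_next].values()) if Q.get(s_next) else 0.0
    Q.setdefault(s, {}).setdefault(a, 0.0)
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

def fuse(q_tables):
    """Fuse agents' knowledge by averaging Q-values per state-action pair."""
    merged = {}
    for Q in q_tables:
        for s, actions in Q.items():
            for a, v in actions.items():
                merged.setdefault(s, {}).setdefault(a, []).append(v)
    return {s: {a: sum(vs) / len(vs) for a, vs in actions.items()}
            for s, actions in merged.items()}

# Two agents learn about the same state from different experiences,
# then share what they learned.
Q1, Q2 = {}, {}
q_update(Q1, "s0", "a", r=1.0, s_next="s1")  # Q1["s0"]["a"] becomes 0.5
q_update(Q2, "s0", "a", r=0.0, s_next="s1")  # Q2["s0"]["a"] stays 0.0
shared = fuse([Q1, Q2])
print(shared["s0"]["a"])  # 0.25
```

Averaging is only one possible fusion rule; weighting each agent's estimate by its experience count is a common refinement.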
This study examines the influence of ethics instruction, religiosity, and intelligence on cheating behavior. A sample of 230 upper level, undergraduate business students had the opportunity to increase their chances of winning money in an experimental situation by falsely reporting their task performance. In general, the results indicate that students who attended worship services more frequently were less likely to cheat than those who attended worship services less frequently, but that students who had taken a course in business ethics were no less likely to cheat than students who had not taken such a course. However, the results do indicate that the extent to which taking a business ethics course influenced cheating behavior was moderated by the religiosity and intelligence of the individual student. In particular, while students who were highly religious were unlikely to cheat whether or not they had taken a business ethics course, students who were not highly religious demonstrated less cheating if they had taken a business ethics course. In addition, the extent of cheating among highly intelligent students was significantly reduced if such students had taken a course in business ethics. Likewise, individuals who were highly intelligent displayed significantly less cheating if they were also highly religious. The implications of these findings are discussed.
This chapter aims to expand the body of empirical literature considered relevant to virtue theory beyond the burned-over districts that are the situationist challenges to virtue ethics and epistemology. We thus raise a rather simple-sounding question: why doesn’t virtue epistemology have an account of intelligence? In the first section, we sketch the history and present state of the person-situation debate to argue for the importance of an interactionist framework in bringing psychological research in general, and intelligence research in particular, to bear on questions of virtue. In Section 2, we discuss the history and present state of intelligence research to argue for its relevance to virtue epistemology. In Section 3, we argue that intelligence sits uneasily in both responsibilist and reliabilist virtue frameworks, which suggests that a new approach to virtue epistemology is needed. We conclude by placing intelligence within a new interactionist framework.
Philosophical accounts of joint action are often prefaced by the observation that there are two different senses in which several agents can intentionally perform an action Φ, such as go for a walk or capture the prey. The agents might intentionally Φ together, as a collective, or they might intentionally Φ in parallel, where Φ is distributively assigned to the agents, considered as a set of individuals. The accounts are supposed to characterise what is distinctive about activities in which several agents intentionally Φ collectively rather than distributively. This dualism between joint and parallel action also crops up outside philosophy. For instance, it has been imported into a debate about whether or not group hunting among chimpanzees is a form of joint cooperative hunting. I offer an account of a form of joint action that falls short of what most philosophers take to be required for genuine joint action, but which is not merely parallel activity. This shows that the dualism between the genuinely joint and the merely parallel is false. I offer my account as an explication of an influential definition of "cooperative behaviour" given by the primatologists Christophe and Hedwig Boesch.
A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: we take a number of well known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this equation formally captures the concept of machine intelligence in the broadest reasonable sense. We then show how this formal definition is related to the theory of universal optimal learning agents. Finally, we survey the many other tests and definitions of intelligence that have been proposed for machines.
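The "general measure of intelligence for arbitrary machines" this abstract describes is usually written in the universal-intelligence literature roughly as follows; the notation here is a common gloss rather than a quotation from the paper itself:

```latex
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

where $\pi$ is the agent being evaluated, $E$ a set of computable environments, $K(\mu)$ the Kolmogorov complexity of environment $\mu$, and $V_{\mu}^{\pi}$ the agent's expected cumulative reward in $\mu$. Intuitively, intelligence is performance averaged over all environments, with simpler (more compressible) environments weighted more heavily.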
The peculiarity of the relationship between philosophy and Artificial Intelligence (AI) has been evident since the advent of AI. This paper aims to lay the basis for an extended and well-founded philosophy of AI: it delineates a multi-layered general framework to which different contributions in the field may be traced back. The core point is to underline how, in the same scenario, both the role of philosophy in AI and the role of AI in philosophy must be considered. Moreover, this framework is revised and extended in the light of a type of multiagent system devoted to addressing the issue of scientific discovery from both a conceptual and a practical point of view.
The enduring progression of artificial intelligence and cybernetics offers an ever-closer possibility of rational and sentient robots. The ethics and morals deriving from this technological prospect have been considered in the philosophy of artificial intelligence, the design of automatons with roboethics and the contemplation of machine ethics through the concept of artificial moral agents. Across these categories, the robotics laws first proposed by Isaac Asimov in the twentieth century remain well recognised and esteemed due to their specification of preventing human harm, stipulating obedience to humans and incorporating robotic self-protection. However, the overwhelming predominance of study in this field has focussed on human–robot interactions without fully considering the ethical inevitability of future artificial intelligences communicating together, and has not addressed the moral nature of robot–robot interactions. A new robotic law is proposed and termed AIonAI, or artificial intelligence-on-artificial intelligence. This law tackles the overlooked area where future artificial intelligences will likely interact amongst themselves, potentially leading to exploitation. As such, they would benefit from adopting a universal law of rights that recognises the inherent dignity and inalienable rights of artificial intelligences. Such a consideration can help prevent the exploitation and abuse of rational and sentient beings, and would also importantly reflect on our moral code of ethics and the humanity of our civilisation.
The enduring innovations in artificial intelligence and robotics offer the promised capacity of computer consciousness, sentience and rationality. The development of these advanced technologies has been considered to merit rights; however, these can only be ascribed in the context of commensurate responsibilities and duties. This represents the discernible next step for evolution in this field. Addressing these needs requires attention to the philosophical perspectives of moral responsibility for artificial intelligence and robotics. A contrast with the moral status of animals may be considered. At a practical level, the attainment of responsibilities by artificial intelligence and robots can benefit from the established responsibilities and duties of human society, as their subsistence exists within this domain. These responsibilities can be further interpreted and crystallised through legal principles, many of which have been conserved from ancient Roman law. The ultimate and unified goal of stipulating these responsibilities resides in the advancement of mankind and the enduring preservation of the core tenets of humanity.
This study investigates factors impacting perceptions of the ethical conduct of peers among 293 students in four US universities. Self-reported ethical behavior and recognition of emotions in others (a dimension of emotional intelligence) impacted perception of the ethical behavior of peers. None of the other dimensions of emotional intelligence were significant. Age, race, sex, GPA, and type of major (business versus nonbusiness) did not impact perception of the ethical behavior of peers. Implications of the results of the study for business schools and industry professionals are discussed.
Mandevillian intelligence is a specific form of collective intelligence in which individual cognitive shortcomings, limitations and biases play a positive functional role in yielding various forms of collective cognitive success. When this idea is transposed to the epistemological domain, mandevillian intelligence emerges as the idea that individual forms of intellectual vice may, on occasion, support the epistemic performance of some form of multi-agent ensemble, such as a socio-epistemic system, a collective doxastic agent, or an epistemic group agent. As a specific form of collective intelligence, mandevillian intelligence is relevant to a number of debates in social epistemology, especially those that seek to understand how group (or collective) knowledge arises from the interactions between a collection of individual epistemic agents. Beyond this, however, mandevillian intelligence raises issues that are relevant to the research agendas of both virtue epistemology and applied epistemology. From a virtue epistemological perspective, mandevillian intelligence encourages us to adopt a relativistic conception of intellectual vice/virtue, enabling us to see how individual forms of intellectual vice may (sometimes) be relevant to collective forms of intellectual virtue. In addition, mandevillian intelligence is relevant to the nascent sub-discipline of applied epistemology. In particular, mandevillian intelligence forces us to see the potential epistemic value of (e.g., technological) interventions that create, maintain or promote individual forms of intellectual vice.
Prisoner's dilemmas can lead rational people to interact in ways that lead to persistent inefficiencies. These dilemmas create a problem for institutional designers to solve: devise institutions that realign individual incentives to achieve collectively rational outcomes. I will argue that we do not always want to eliminate misalignments between individual incentives and efficient outcomes. Sometimes we want to preserve prisoner's dilemmas, even when we know that they will systematically lead to inefficiencies. No doubt, prisoner's dilemmas can create problems, but they also create opportunities to practice the cooperative norms that make market institutions possible in the first place. An ethical market culture, I argue, benefits from the presence of prisoner's dilemmas. I first consider standard approaches for solving prisoner's dilemmas. I then argue for the value of prisoner's dilemmas. Finally, I show the significance of this argument for advocating codes of business ethics.
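The inefficiency this abstract turns on can be made concrete with the textbook one-shot prisoner's dilemma; the payoff numbers and code below are a standard illustration, not material from the paper:

```python
# One-shot prisoner's dilemma with the conventional payoffs
# T=5 > R=3 > P=1 > S=0. Keys are (row move, column move);
# values are (row payoff, column payoff). "C" = cooperate, "D" = defect.
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def best_response(opponent_move):
    """Row player's payoff-maximising reply to a fixed opponent move."""
    return max("CD", key=lambda m: payoffs[(m, opponent_move)][0])

# Defection strictly dominates: it is the best reply to either opponent move...
assert best_response("C") == "D" and best_response("D") == "D"
# ...so rational players end at (D, D), which pays each of them less
# than the cooperative outcome (C, C) would have.
print(payoffs[("D", "D")], "is worse for both than", payoffs[("C", "C")])
```

The misalignment the author wants institutional designers to sometimes preserve is exactly the gap between the dominant-strategy outcome (D, D) and the Pareto-superior outcome (C, C).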