Results for 'artificial agent'

1000+ found
  1. What is It Like to Encounter an Autonomous Artificial Agent? Karsten Weber - 2013 - AI and Society 28 (4):483-489.
    Following up on Thomas Nagel’s paper “What is it like to be a bat?” and Alan Turing’s essay “Computing machinery and intelligence,” it shall be claimed that a successful interaction of human beings and autonomous artificial agents depends more on which characteristics human beings ascribe to the agent than on whether the agent really has those characteristics. It will be argued that Masahiro Mori’s concept of the “uncanny valley” as well as evidence from several empirical studies supports (...)
  2. Trust and Multi-Agent Systems: Applying the Diffuse, Default Model of Trust to Experiments Involving Artificial Agents. [REVIEW] Jeff Buechner & Herman T. Tavani - 2011 - Ethics and Information Technology 13 (1):39-51.
    We argue that the notion of trust, as it figures in an ethical context, can be illuminated by examining research in artificial intelligence on multi-agent systems in which commitment and trust are modeled. We begin with an analysis of a philosophical model of trust based on Richard Holton’s interpretation of P. F. Strawson’s writings on freedom and resentment, and we show why this account of trust is difficult to extend to artificial agents (AAs) as well as to (...)
  3. This “Ethical Trap” Is for Roboticists, Not Robots: On the Issue of Artificial Agent Ethical Decision-Making. Keith W. Miller, Marty J. Wolf & Frances Grodzinsky - 2017 - Science and Engineering Ethics 23 (2):389-401.
    In this paper we address the question of when a researcher is justified in describing his or her artificial agent as demonstrating ethical decision-making. The paper is motivated by the amount of research being done that attempts to imbue artificial agents with expertise in ethical decision-making. It seems clear that computing systems make decisions, in that they make choices between different options; and there is scholarship in philosophy that addresses the distinction between ethical decision-making and general decision-making. (...)
  4. Artificial Agency, Consciousness, and the Criteria for Moral Agency: What Properties Must an Artificial Agent Have to Be a Moral Agent? [REVIEW] Kenneth Einar Himma - 2009 - Ethics and Information Technology 11 (1):19-29.
    In this essay, I describe and explain the standard accounts of agency, natural agency, artificial agency, and moral agency, as well as articulate what are widely taken to be the criteria for moral agency, supporting the contention that this is the standard account with citations from such widely used and respected professional resources as the Stanford Encyclopedia of Philosophy, Routledge Encyclopedia of Philosophy, and the Internet Encyclopedia of Philosophy. I then flesh out the implications of some of these well-settled (...)
  5. Developing Artificial Agents Worthy of Trust: Would You Buy a Used Car From This Artificial Agent? [REVIEW] F. S. Grodzinsky, K. W. Miller & M. J. Wolf - 2011 - Ethics and Information Technology 13 (1):17-27.
    There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to this literature regarding e-trust in conjunction with presenting our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of research (...)
  6. Two Acts of Social Intelligence: The Effects of Mimicry and Social Praise on the Evaluation of an Artificial Agent. [REVIEW] Maurits Kaptein, Panos Markopoulos, Boris Ruyter & Emile Aarts - 2011 - AI and Society 26 (3):261-273.
    This paper describes a study of the effects of two acts of social intelligence, namely mimicry and social praise, when used by an artificial social agent. An experiment (N = 50) is described which shows that social praise—positive feedback about the ongoing conversation—increases the perceived friendliness of a chat-robot. Mimicry—displaying matching behavior—enhances the perceived intelligence of the robot. We advise designers to incorporate both mimicry and social praise when their system needs to function as a social actor. (...)
  7. A Minimalist Model of the Artificial Autonomous Moral Agent (AAMA). Ioan Muntean & Don Howard - 2016 - In SSS-16 Symposium Technical Reports. Association for the Advancement of Artificial Intelligence. AAAI.
    This paper proposes a model for an artificial autonomous moral agent (AAMA), which is parsimonious in its ontology and minimal in its ethical assumptions. Starting from a set of moral data, this AAMA is able to learn and develop a form of moral competency. It resembles an “optimizing predictive mind,” which uses moral data (describing typical behavior of humans) and a set of dispositional traits to learn how to classify different actions (given background knowledge) as morally (...)
  8. Is It Possible to Grow an I–Thou Relation with an Artificial Agent? A Dialogistic Perspective. Trausan-Matu Stefan - forthcoming - AI and Society.
  9. Two Acts of Social Intelligence: The Effects of Mimicry and Social Praise on the Evaluation of an Artificial Agent. Maurits Kaptein, Panos Markopoulos, Boris de Ruyter & Emile Aarts - 2011 - AI and Society 26 (3):261-273.
  10. Artificial Intelligence as a Discursive Practice: The Case of Embodied Software Agent Systems. [REVIEW] Sean Zdenek - 2003 - AI and Society 17 (3-4):340-363.
    In this paper, I explore some of the ways in which Artificial Intelligence (AI) is mediated discursively. I assume that AI is informed by an “ancestral dream” to reproduce nature by artificial means. This dream drives the production of “cyborg discourse”, which hinges on the belief that human nature (especially intelligence) can be reduced to symbol manipulation and hence replicated in a machine. Cyborg discourse, I suggest, produces AI systems by rhetorical means; it does not merely describe AI (...)
  11. Pollock on Token Physicalism, Agent Materialism and Strong Artificial Intelligence. Dale Jacquette - 1993 - International Studies in the Philosophy of Science 7 (2):127-140.
    An examination of John Pollock's theory of artificial intelligence and philosophy of mind raises difficulties for his mechanist concept of person. Token physicalism, agent materialism, and strong artificial intelligence are so related that if the first two propositions are not well-established, then there is no justification for believing that an artificial consciousness can be designed and built. Pollock's arguments are shown to be inconclusive in upholding a functionalist theory of persons as supervenient but purely physical (...)
  12. Analysis of Foreign Exchange Interventions by Intervention Agent with an Artificial Market Approach. Hiroki Matsui & Satoshi Tojo - 2005 - Transactions of the Japanese Society for Artificial Intelligence 20:36-45.
  13. Artificial Moral Agents Are Infeasible with Foreseeable Technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
  14. The Ethics of Designing Artificial Agents. Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2008 - Ethics and Information Technology 10 (2-3):115-121.
    In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial agents. One such (...)
  15. Modelling Trust in Artificial Agents, A First Step Toward the Analysis of E-Trust. Mariarosaria Taddeo - 2010 - Minds and Machines 20 (2):243-257.
    This paper provides a new analysis of e-trust: trust occurring in digital contexts, among the artificial agents of a distributed artificial system. The analysis endorses a non-psychological approach and rests on a Kantian regulative ideal of a rational agent, able to choose the best option for itself, given a specific scenario and a goal to achieve. The paper first introduces e-trust, describing its relevance for contemporary society, and then presents a new theoretical analysis (...)
  16. The Ethical Principles for the Development of Artificial Moral Agent - Focusing on the Top-Down Approach -. 최현철, 신현주 & 변순용 - 2016 - Journal of Ethics: The Korean Association of Ethics 1 (111):31-53.
  17. A New Agent-Based Tool to Build Artificial Worlds. Pietro Terna - 2010 - In Marisa Faggini, Concetto Paolo Vinci, Antonio Abatemarco, Rossella Aiello, F. T. Arecchi, Lucio Biggiero, Giovanna Bimonte, Sergio Bruno, Carl Chiarella, Maria Pia Di Gregorio, Giacomo Di Tollo, Simone Giansante, Jaime Gil Aluja, A. I͡U Khrennikov, Marianna Lyra, Riccardo Meucci, Guglielmo Monaco, Giancarlo Nota, Serena Sordi, Pietro Terna, Kumaraswamy Velupillai & Alessandro Vercelli (eds.), Decision Theory and Choices: A Complexity Approach. Springer Verlag Italia. pp. 67-81.
  18. Optimalisation and 'Thoughtful Conjecturing' as Principles of Analytical Guidance in Social Decision Making / S. Bruno ; Part II: Agent Based Models: A New Agent-Based Tool to Build Artificial Worlds. P. Terna - 2010 - In Marisa Faggini, Concetto Paolo Vinci, Antonio Abatemarco, Rossella Aiello, F. T. Arecchi, Lucio Biggiero, Giovanna Bimonte, Sergio Bruno, Carl Chiarella, Maria Pia Di Gregorio, Giacomo Di Tollo, Simone Giansante, Jaime Gil Aluja, A. I͡U Khrennikov, Marianna Lyra, Riccardo Meucci, Guglielmo Monaco, Giancarlo Nota, Serena Sordi, Pietro Terna, Kumaraswamy Velupillai & Alessandro Vercelli (eds.), Decision Theory and Choices: A Complexity Approach. Springer Verlag Italia.
  19. The Influence of Epistemology on the Design of Artificial Agents. M. H. Lee & N. J. Lacey - 2003 - Minds and Machines 13 (3):367-395.
    Unlike natural agents, artificial agents are, to varying extents, designed according to sets of principles or assumptions. We argue that the designer's philosophical position on truth, belief and knowledge has far-reaching implications for the design and performance of the resulting agents. Of the many sources of design information and background, we believe philosophical theories are under-rated as valuable influences on the design process. To explore this idea we have implemented some computer-based agents with their control algorithms inspired by (...)
  20. The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. [REVIEW] Nick Bostrom - 2012 - Minds and Machines 22 (2):71-85.
    This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of intelligence, (...)
  21. The Epistemological Foundations of Artificial Agents. Nicola Lacey & M. Lee - 2003 - Minds and Machines 13 (3):339-365.
    A situated agent is one which operates within an environment. In most cases, the environment in which the agent exists will be more complex than the agent itself. This means that an agent, human or artificial, which wishes to carry out non-trivial operations in its environment must use techniques which allow an unbounded world to be represented within a cognitively bounded agent. We present a brief description of some important theories within the fields of (...)
  22. Distributed Artificial Intelligence From a Socio-Cognitive Standpoint: Looking at Reasons for Interaction. [REVIEW] Maria Miceli, Amedo Cesta & Paola Rizzo - 1995 - AI and Society 9 (4):287-320.
    Distributed Artificial Intelligence (DAI) deals with computational systems where several intelligent components interact in a common environment. This paper aims to point out and foster the exchange between DAI and cognitive and social science in order to deal with the issues of interaction, in particular the reasons and possible strategies for social behaviour in multi-agent settings. A model of interaction is also described which is motivated by requirements of cognitive plausibility and grounded in the notions of power, dependence and help. (...)
  23. Artificial Evil and the Foundation of Computer Ethics. Luciano Floridi & J. W. Sanders - 2001 - Ethics and Information Technology 3 (1):55-66.
    Moral reasoning traditionally distinguishes two types of evil: moral (ME) and natural (NE). The standard view is that ME is the product of human agency and so includes phenomena such as war, torture and psychological cruelty; that NE is the product of nonhuman agency, and so includes natural disasters such as earthquakes, floods, disease and famine; and finally, that more complex cases are appropriately analysed as a combination of ME and NE. Recently, as a result of developments in autonomous agents in cyberspace, a new class of interesting and (...)
  24. Artificial Agents - Personhood in Law and Philosophy. Samir Chopra - manuscript
    Thinking about how the law might decide whether to extend legal personhood to artificial agents provides a valuable testbed for philosophical theories of mind. Further, philosophical and legal theorising about personhood for artificial agents can be mutually informing. We investigate two case studies, drawing on legal discussions of the status of artificial agents. The first looks at the doctrinal difficulties presented by the contracts entered into by artificial agents. We conclude that it is not necessary or (...)
  25. Belief Representation in a Deductivist Type-Free Doxastic Logic. Francesco Orilia - 1994 - Minds and Machines 4 (2):163-203.
    Konolige's technical notion of belief based on deduction structures is briefly reviewed and its usefulness for the design of artificial agents with limited representational and deductive capacities is pointed out. The design of artificial agents with more sophisticated representational and deductive capacities is then taken into account. Extended representational capacities require in the first place a solution to the intensional context problems. As an alternative to Konolige's modal first-order language, an approach based on type-free property theory is proposed. (...)
  26. Gossip-Based Self-Organising Agent Societies and the Impact of False Gossip. Sharmila Savarimuthu, Maryam Purvis, Martin Purvis & Bastin Tony Roy Savarimuthu - 2013 - Minds and Machines 23 (4):419-441.
    The objective of this work is to demonstrate how cooperative sharers and uncooperative free riders can be placed in different groups of an electronic society in a decentralised manner. We have simulated an agent-based open and decentralised P2P system which self-organises itself into different groups to avoid cooperative sharers being exploited by uncooperative free riders. This approach encourages sharers to move to better groups and restricts free riders into those groups of sharers without needing centralised control. Our approach is (...)
  27. Neminem Laedere. An Evolutionary Agent-Based Model of the Interplay Between Punishment and Damaging Behaviours. Nicola Lettieri & Domenico Parisi - 2013 - Artificial Intelligence and Law 21 (4):425-453.
  28. Agent-Based Computational Models and Generative Social Science. Joshua M. Epstein - 1999 - Complexity 4 (5):41-60.
  29. Alcohol Consumption Among College Students: An Agent-Based Computational Simulation. Laura A. Garrison & David S. Babcock - 2009 - Complexity 14 (6):35-44.
  30. On the Morality of Artificial Agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of (...)
  31. One Decade of Universal Artificial Intelligence. Marcus Hutter - 2012 - In Pei Wang & Ben Goertzel (eds.), Theoretical Foundations of Artificial General Intelligence. Springer. pp. 67-88.
    The first decade of this century has seen the nascency of the first mathematical theory of general artificial intelligence. This theory of Universal Artificial Intelligence (UAI) has made significant contributions to many theoretical, philosophical, and practical AI questions. In a series of papers culminating in a book (Hutter, 2005), an exciting, sound and complete mathematical model for a super-intelligent agent (AIXI) has been developed and rigorously analyzed. While nowadays most AI researchers avoid discussing intelligence, the award-winning PhD (...)
  32. Artificial Consciousness and Artificial Ethics: Between Realism and Social Relationism. Steve Torrance - 2014 - Philosophy and Technology 27 (1):9-29.
    I compare a ‘realist’ with a ‘social–relational’ perspective on our judgments of the moral status of artificial agents (AAs). I develop a realist position according to which the moral status of a being—particularly in relation to moral patiency attribution—is closely bound up with that being’s ability to experience states of conscious satisfaction or suffering (CSS). For a realist, both moral status and experiential capacity are objective properties of agents. A social relationist denies the existence of any such objective properties (...)
  33. Creating a Discoverer: Autonomous Knowledge Seeking Agent. [REVIEW] Jan M. Zytkow - 1995 - Foundations of Science 1 (2):253-283.
    Construction of a robot discoverer can be treated as the ultimate success of automated discovery. In order to build such an agent we must understand the algorithmic details of the discovery processes and the representation of scientific knowledge needed to support the automation. To understand the discovery process we must build automated systems. This paper investigates the anatomy of a robot-discoverer, examining various components developed and refined to various degrees over two decades. We also clarify the notion of autonomy (...)
  34. Beyond Persons: Extending the Personal/Subpersonal Distinction to Non-Rational Animals and Artificial Agents. Manuel de Pinedo-Garcia & Jason Noble - 2008 - Biology and Philosophy 23 (1):87-100.
    The distinction between personal level explanations and subpersonal ones has been subject to much debate in philosophy. We understand it as one between explanations that focus on an agent’s interaction with its environment, and explanations that focus on the physical or computational enabling conditions of such an interaction. The distinction, understood this way, is necessary for a complete account of any agent, rational or not, biological or artificial. In particular, we review some recent research in Artificial (...)
  35. A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents. Wendell Wallach, Stan Franklin & Colin Allen - 2010 - Topics in Cognitive Science 2 (3):454-485.
    Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks for (...)
  36. What Is the Model of Trust for Multi-Agent Systems? Whether or Not E-Trust Applies to Autonomous Agents. Massimo Durante - 2010 - Knowledge, Technology and Policy 23 (3-4):347-366.
    A socio-cognitive approach to trust can help us envisage a notion of networked trust for multi-agent systems based on different interacting agents. In this framework, the issue is to evaluate whether or not a socio-cognitive analysis of trust can apply to the interactions between human and autonomous agents. Two main arguments support two alternative hypotheses; one suggests that only reliance applies to artificial agents, because predictability of agents’ digital interaction is viewed as an absolute value and human relation (...)
  37. The Concept of Umwelt Overlap and its Application to Cooperative Action in Multi-Agent Systems. Maria Isabel Aldinhas Ferreira & Miguel Gama Caldas - 2013 - Biosemiotics 6 (3):497-514.
    The present paper stems from the biosemiotic modelling of individual artificial cognition proposed by Ferreira and Caldas (2012) but goes further by introducing the concept of Umwelt Overlap. The introduction of this concept is of fundamental importance, making the present model closer to natural cognition. In fact, cognition can only be viewed as a purely individual phenomenon for analytical purposes. In nature it always involves the crisscrossing of the spheres of action of those sharing the same environmental bubble. Plus, (...)
  38. Ethics and Artificial Life: From Modeling to Moral Agents. [REVIEW] John P. Sullins - 2005 - Ethics and Information Technology 7 (3):139-148.
    Artificial Life has two goals: the first attempts to describe fundamental qualities of living systems through agent-based computer models; the second studies whether or not we can artificially create living things in computational media that can be realized either virtually in software, or through biotechnology. The study of ALife has recently branched into two further subdivisions: one is “dry” ALife, which is the study of living systems “in silico” through the use of computer simulations, and the other (...)
  39. Heuristic Evaluation Functions in Artificial Intelligence Search Algorithms. Richard E. Korf - 1995 - Minds and Machines 5 (4):489-498.
    We consider a special case of heuristics, namely numeric heuristic evaluation functions, and their use in artificial intelligence search algorithms. The problems they are applied to fall into three general classes: single-agent path-finding problems, two-player games, and constraint-satisfaction problems. In a single-agent path-finding problem, such as the Fifteen Puzzle or the travelling salesman problem, a single agent searches for a shortest path from an initial state to a goal state. Two-player games, such as chess and checkers, (...)
  40. Artificial Moral Cognition: From Functionalism to Autonomous Moral Agents. Muntean Ioan & Don Howard - forthcoming - In Powers Tom (ed.), Philosophy and Computing: Essays in epistemology, philosophy of mind, logic, and ethics. Springer.
    This paper proposes a model of the Artificial Autonomous Moral Agent (AAMA), discusses a standard of moral cognition for AAMA, and compares it with other models of artificial normative agency. It is argued here that artificial morality is possible within the framework of a “moral dispositional functionalism.” This AAMA is able to “read” the behavior of human actors, available as collected data, and to categorize their moral behavior based on moral patterns herein. The present model is (...)
     
  41. Fundamental Issues of Artificial Intelligence. Vincent C. Müller (ed.) - 2016 - Springer.
    [Müller, Vincent C. (ed.), (2016), Fundamental issues of artificial intelligence (Synthese Library, 377; Berlin: Springer). 570 pp.] -- This volume offers a look at the fundamental issues of present and future AI, especially from cognitive science, computer science, neuroscience and philosophy. This work examines the conditions for artificial intelligence, how these relate to the conditions for intelligence in humans and other natural agents, as well as ethical and societal problems that artificial intelligence raises or will raise. The (...)
  42. The Problem of Machine Ethics in Artificial Intelligence. Rajakishore Nath & Vineet Sahu - forthcoming - AI and Society:1-9.
    The advent of the intelligent robot has occupied a significant position in society over the past decades and has given rise to new issues in society. As we know, the primary aim of artificial intelligence or robotic research is not only to develop advanced programs to solve our problems but also to reproduce mental qualities in machines. The critical claim of artificial intelligence advocates is that there is no distinction between mind and machines and thus they argue that (...)
  43. ConsScale: A Pragmatic Scale for Measuring the Level of Consciousness in Artificial Agents. Raul Arrabales, Agapito Ledezma & Araceli Sanchis - 2010 - Journal of Consciousness Studies 17 (3-4):3-4.
    One of the key problems the field of Machine Consciousness is currently facing is the need to accurately assess the potential level of consciousness that an artificial agent might develop. This paper presents a novel artificial consciousness scale designed to provide a pragmatic and intuitive reference in the evaluation of MC implementations. The version of ConsScale described in this work provides a comprehensive evaluation mechanism which enables the estimation of the potential degree of consciousness of most of (...)
  44. Norms in Artificial Decision Making. Magnus Boman - 1999 - Artificial Intelligence and Law 7 (1):17-35.
    A method for forcing norms onto individual agents in a multi-agent system is presented. The agents under study are supersoft agents: autonomous artificial agents programmed to represent and evaluate vague and imprecise information. Agents are further assumed to act in accordance with advice obtained from a normative decision module, with which they can communicate. Norms act as global constraints on the evaluations performed in the decision module and hence no action that violates a norm will be suggested to (...)
  45. Philosophy and Distributed Artificial Intelligence: The Case of Joint Intention. Raimo Tuomela - 1996 - In N. Jennings & G. O'Hare (eds.), Foundations of Distributed Artificial Intelligence. Wiley.
    In current philosophical research the term 'philosophy of social action' can be used - and has been used - in a broad sense to encompass the following central research topics: 1) action occurring in a social context; this includes multi-agent action; 2) joint attitudes (or "we-attitudes" such as joint intention, mutual belief) and other social attitudes needed for the explication and explanation of social action; 3) social macro-notions, such as actions performed by social groups and properties of social groups (...)
  46. Demonstrating Sensemaking Emergence in Artificial Agents: A Method and an Example. Olivier L. Georgeon & James B. Marshall - 2013 - International Journal of Machine Consciousness 5 (2):131-144.
    We propose an experimental method to study the possible emergence of sensemaking in artificial agents. This method involves analyzing the agent's behavior in a test bed environment that presents regularities in the possibilities of interaction afforded to the agent, while the agent has no presuppositions about the underlying functioning of the environment that explains such regularities. We propose a particular environment that permits such an experiment, called the Small Loop Problem. We argue that the agent's (...)
  47. The Ethics of Designing Artificial Agents. Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2008 - Ethics and Information Technology 10 (2-3):112-121.
    In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial agents. One such (...)
  48. Can Artificial Systems Be Part of a Collective Action? Anna Strasser - 2015 - In Catrin Misselhorn (ed.), Collective Agency and Cooperation in Natural and Artificial Systems. Springer Verlag. pp. 205-218.
    To answer the question of whether artificial systems may count as agents in a collective action, I will argue that a collective action is a special kind of an action and show that the sufficient conditions for playing an active part in a collective action differ from those required for being an individual intentional agent.
     
  49. Computer Systems: Moral Entities but Not Moral Agents. [REVIEW] Deborah G. Johnson - 2006 - Ethics and Information Technology 8 (4):195-204.
    After discussing the distinction between artifacts and natural entities, and the distinction between artifacts and technology, the conditions of the traditional account of moral agency are identified. While computer system behavior meets four of the five conditions, it does not and cannot meet a key condition. Computer systems do not have mental states, and even if they could be construed as having mental states, they do not have intendings to act, which arise from an agent’s freedom. On the other (...)
  50. Ethics and Consciousness in Artificial Agents. Steve Torrance - 2008 - AI and Society 22 (4):495-521.
    In what ways should we include future humanoid robots, and other kinds of artificial agents, in our moral universe? We consider the Organic view, which maintains that artificial humanoid agents, based on current computational technologies, could not count as full-blooded moral agents, nor as appropriate targets of intrinsic moral concern. On this view, artificial humanoids lack certain key properties of biological organisms, which preclude them from having full moral status. Computationally controlled systems, however advanced in their cognitive (...)
Showing 1–50 of 1000+