Results for 'Artificial agents'

1000+ found
  1. Developing artificial agents worthy of trust: “Would you buy a used car from this artificial agent?”. [REVIEW]F. S. Grodzinsky, K. W. Miller & M. J. Wolf - 2011 - Ethics and Information Technology 13 (1):17-27.
    There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to this literature regarding e-trust in conjunction with presenting our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of (...)
    17 citations
  2. On the morality of artificial agents.Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility (...)
    256 citations
  3. Artificial agents, good care, and modernity.Mark Coeckelbergh - 2015 - Theoretical Medicine and Bioethics 36 (4):265-277.
    When is it ethically acceptable to use artificial agents in health care? This article articulates some criteria for good care and then discusses whether machines as artificial agents that take over care tasks meet these criteria. Particular attention is paid to intuitions about the meaning of ‘care’, ‘agency’, and ‘taking over’, but also to the care process as a labour process in a modern organizational and financial-economic context. It is argued that while there is in principle (...)
    13 citations
  4. Artificial agents - personhood in law and philosophy.Samir Chopra - manuscript
    Thinking about how the law might decide whether to extend legal personhood to artificial agents provides a valuable testbed for philosophical theories of mind. Further, philosophical and legal theorising about personhood for artificial agents can be mutually informing. We investigate two case studies, drawing on legal discussions of the status of artificial agents. The first looks at the doctrinal difficulties presented by the contracts entered into by artificial agents. We conclude that it (...)
    4 citations
  5. Artificial agents among us: Should we recognize them as agents proper?Migle Laukyte - 2017 - Ethics and Information Technology 19 (1):1-17.
    In this paper, I discuss whether in a society where the use of artificial agents is pervasive, these agents should be recognized as having rights like those we accord to group agents. This kind of recognition I understand to be at once social and legal, and I argue that in order for an artificial agent to be so recognized, it will need to meet the same basic conditions in light of which group agents are (...)
    8 citations
  6. Risk Imposition by Artificial Agents: The Moral Proxy Problem.Johanna Thoma - forthcoming - In Silja Vöneky, Philipp Kellmeyer, Oliver Müller & Wolfram Burgard (eds.), The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives. Cambridge University Press.
    Where artificial agents are not liable to be ascribed true moral agency and responsibility in their own right, we can understand them as acting as proxies for human agents, as making decisions on their behalf. What I call the ‘Moral Proxy Problem’ arises because it is often not clear for whom a specific artificial agent is acting as a moral proxy. In particular, we need to decide whether artificial agents should be acting as proxies (...)
    1 citation
  7. Modelling Trust in Artificial Agents, A First Step Toward the Analysis of e-Trust.Mariarosaria Taddeo - 2010 - Minds and Machines 20 (2):243-257.
    This paper provides a new analysis of e-trust, trust occurring in digital contexts, among the artificial agents of a distributed artificial system. The analysis endorses a non-psychological approach and rests on a Kantian regulative ideal of a rational agent, able to choose the best option for itself, given a specific scenario and a goal to achieve. The paper first introduces e-trust describing its relevance for the contemporary society and then presents a new theoretical analysis (...)
    37 citations
  8. Modeling artificial agents’ actions in context – a deontic cognitive event ontology.Miroslav Vacura - 2020 - Applied ontology 15 (4):493-527.
    Although there have been efforts to integrate Semantic Web technologies and artificial agents related AI research approaches, they remain relatively isolated from each other. Herein, we introduce a new ontology framework designed to support the knowledge representation of artificial agents’ actions within the context of the actions of other autonomous agents and inspired by standard cognitive architectures. The framework consists of four parts: 1) an event ontology for information pertaining to actions and events; 2) an (...)
  9. Artificial agents and their moral nature.Luciano Floridi - 2014 - In Peter Kroes (ed.), The moral status of technical artefacts. pp. 185–212.
    Artificial agents, particularly but not only those in the infosphere Floridi (Information – A very short introduction. Oxford University Press, Oxford, 2010a), extend the class of entities that can be involved in moral situations, for they can be correctly interpreted as entities that can perform actions with good or evil impact (moral agents). In this chapter, I clarify the concepts of agent and of artificial agent and then distinguish between issues concerning their moral behaviour vs. issues (...)
     
    1 citation
  10. Artificial agents’ explainability to support trust: considerations on timing and context.Guglielmo Papagni, Jesse de Pagter, Setareh Zafari, Michael Filzmoser & Sabine T. Koeszegi - forthcoming - AI and Society:1-14.
    Strategies for improving the explainability of artificial agents are a key approach to support the understandability of artificial agents’ decision-making processes and their trustworthiness. However, since explanations are not inclined to standardization, finding solutions that fit the algorithmic-based decision-making processes of artificial agents poses a compelling challenge. This paper addresses the concept of trust in relation to complementary aspects that play a role in interpersonal and human–agent relationships, such as users’ confidence and their perception (...)
  11. Artificial agents and the expanding ethical circle.Steve Torrance - 2013 - AI and Society 28 (4):399-414.
  12. Ethics and consciousness in artificial agents.Steve Torrance - 2008 - AI and Society 22 (4):495-521.
    In what ways should we include future humanoid robots, and other kinds of artificial agents, in our moral universe? We consider the Organic view, which maintains that artificial humanoid agents, based on current computational technologies, could not count as full-blooded moral agents, nor as appropriate targets of intrinsic moral concern. On this view, artificial humanoids lack certain key properties of biological organisms, which preclude them from having full moral status. Computationally controlled systems, however advanced (...)
    55 citations
  13. Social Cognition and Artificial Agents.Anna Strasser - 2017 - In Vincent Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin, Germany: Springer. pp. 106-114.
    Standard notions in philosophy of mind have a tendency to characterize socio-cognitive abilities as if they were unique to sophisticated human beings. However, assuming that it is likely that we are soon going to share a large part of our social lives with various kinds of artificial agents, it is important to develop a conceptual framework providing notions that are able to account for various types of social agents. Recent minimal approaches to socio-cognitive abilities such as mindreading (...)
    1 citation
  14. Artificial agents in social cognitive sciences.Thierry Chaminade & Jessica K. Hodgins - 2006 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 7 (3):347-353.
  15. Social Cognition and Artificial Agents.Anna Strasser - 2017 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2017. Cham: Springer. pp. 106-114.
    Standard notions in philosophy of mind have a tendency to characterize socio-cognitive abilities as if they were unique to sophisticated human beings. However, assuming that it is likely that we are soon going to share a large part of our social lives with various kinds of artificial agents, it is important to develop a conceptual framework providing notions that are able to account for various types of social agents. Recent minimal approaches to socio-cognitive abilities such as mindreading (...)
    3 citations
  16. The ethics of designing artificial agents.Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2008 - Ethics and Information Technology 10 (2-3):115-121.
    In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial (...)
    19 citations
  17. How should artificial agents make risky choices on our behalf?Johanna Thoma - 2021 - LSE Philosophy Blog.
  18. The epistemological foundations of artificial agents.Nicola Lacey & M. Lee - 2003 - Minds and Machines 13 (3):339-365.
    A situated agent is one which operates within an environment. In most cases, the environment in which the agent exists will be more complex than the agent itself. This means that an agent, human or artificial, which wishes to carry out non-trivial operations in its environment must use techniques which allow an unbounded world to be represented within a cognitively bounded agent. We present a brief description of some important theories within the fields of epistemology and metaphysics. We then (...)
  19. This “Ethical Trap” Is for Roboticists, Not Robots: On the Issue of Artificial Agent Ethical Decision-Making.Keith W. Miller, Marty J. Wolf & Frances Grodzinsky - 2017 - Science and Engineering Ethics 23 (2):389-401.
    In this paper we address the question of when a researcher is justified in describing his or her artificial agent as demonstrating ethical decision-making. The paper is motivated by the amount of research being done that attempts to imbue artificial agents with expertise in ethical decision-making. It seems clear that computing systems make decisions, in that they make choices between different options; and there is scholarship in philosophy that addresses the distinction between ethical decision-making and general decision-making. (...)
    8 citations
  20. Trust and multi-agent systems: applying the diffuse, default model of trust to experiments involving artificial agents. [REVIEW]Jeff Buechner & Herman T. Tavani - 2011 - Ethics and Information Technology 13 (1):39-51.
    We argue that the notion of trust, as it figures in an ethical context, can be illuminated by examining research in artificial intelligence on multi-agent systems in which commitment and trust are modeled. We begin with an analysis of a philosophical model of trust based on Richard Holton’s interpretation of P. F. Strawson’s writings on freedom and resentment, and we show why this account of trust is difficult to extend to artificial agents (AAs) as well as to (...)
    14 citations
  21. On social laws for artificial agent societies: off-line design.Yoav Shoham & Moshe Tennenholtz - 1995 - Artificial Intelligence 73 (1-2):231-252.
  22. On the Moral Equality of Artificial Agents.Christopher Wareham - 2011 - International Journal of Technoethics 2 (1):35-42.
    Artificial agents such as robots are performing increasingly significant ethical roles in society. As a result, there is a growing literature regarding their moral status with many suggesting it is justified to regard manufactured entities as having intrinsic moral worth. However, the question of whether artificial agents could have the high degree of moral status that is attributed to human persons has largely been neglected. To address this question, the author developed a respect-based account of the (...)
    5 citations
  23. Meaning in Artificial Agents: The Symbol Grounding Problem Revisited.Dairon Rodríguez, Jorge Hermosillo & Bruno Lara - 2012 - Minds and Machines 22 (1):25-34.
    The Chinese room argument has presented a persistent headache in the search for Artificial Intelligence. Since it first appeared in the literature, various interpretations have been made, attempting to understand the problems posed by this thought experiment. Throughout all this time, some researchers in the Artificial Intelligence community have seen Symbol Grounding as proposed by Harnad as a solution to the Chinese room argument. The main thesis in this paper is that although related, these two issues present different (...)
    2 citations
  24. A Pragmatic Approach to the Intentional Stance: Semantic, Empirical and Ethical Considerations for the Design of Artificial Agents.Guglielmo Papagni & Sabine Koeszegi - 2021 - Minds and Machines 31 (4):505-534.
    Artificial agents are progressively becoming more present in everyday-life situations and more sophisticated in their interaction affordances. In some specific cases, like Google Duplex, GPT-3 bots or Deep Mind’s AlphaGo Zero, their capabilities reach or exceed human levels. The use contexts of everyday life necessitate making such agents understandable by laypeople. At the same time, displaying human levels of social behavior has kindled the debate over the adoption of Dennett’s ‘intentional stance’. By means of a comparative analysis (...)
    3 citations
  25. The epistemological foundations of artificial agents.Nick J. Lacey & M. H. Lee - 2003 - Minds and Machines 13 (3):339-365.
    A situated agent is one which operates within an environment. In most cases, the environment in which the agent exists will be more complex than the agent itself. This means that an agent, human or artificial, which wishes to carry out non-trivial operations in its environment must use techniques which allow an unbounded world to be represented within a cognitively bounded agent. We present a brief description of some important theories within the fields of epistemology and metaphysics. We then (...)
  26. The ethics of designing artificial agents.Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2008 - Ethics and Information Technology 10 (2-3):112-121.
    In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial (...)
     
    1 citation
  27. Demonstrating sensemaking emergence in artificial agents: A method and an example.Olivier L. Georgeon & James B. Marshall - 2013 - International Journal of Machine Consciousness 5 (2):131-144.
    We propose an experimental method to study the possible emergence of sensemaking in artificial agents. This method involves analyzing the agent's behavior in a test bed environment that presents regularities in the possibilities of interaction afforded to the agent, while the agent has no presuppositions about the underlying functioning of the environment that explains such regularities. We propose a particular environment that permits such an experiment, called the Small Loop Problem. We argue that the agent's behavior demonstrates sensemaking (...)
  28. Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? [REVIEW]Kenneth Einar Himma - 2009 - Ethics and Information Technology 11 (1):19-29.
    In this essay, I describe and explain the standard accounts of agency, natural agency, artificial agency, and moral agency, as well as articulate what are widely taken to be the criteria for moral agency, supporting the contention that this is the standard account with citations from such widely used and respected professional resources as the Stanford Encyclopedia of Philosophy, Routledge Encyclopedia of Philosophy, and the Internet Encyclopedia of Philosophy. I then flesh out the implications of some of these well-settled (...)
    61 citations
  29. Hiding Behind Machines: Artificial Agents May Help to Evade Punishment.Till Feier, Jan Gogoll & Matthias Uhl - 2022 - Science and Engineering Ethics 28 (2):1-19.
    The transfer of tasks with sometimes far-reaching implications to autonomous systems raises a number of ethical questions. In addition to fundamental questions about the moral agency of these systems, behavioral issues arise. We investigate the empirically accessible question of whether the imposition of harm by an agent is systematically judged differently when the agent is artificial and not human. The results of a laboratory experiment suggest that decision-makers can actually avoid punishment more easily by delegating to machines than by (...)
  30. The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents. [REVIEW]Nick Bostrom - 2012 - Minds and Machines 22 (2):71-85.
    This paper discusses the relation between intelligence and motivation in artificial agents, developing and briefly arguing for two theses. The first, the orthogonality thesis, holds (with some caveats) that intelligence and final goals (purposes) are orthogonal axes along which possible artificial intellects can freely vary—more or less any level of intelligence could be combined with more or less any final goal. The second, the instrumental convergence thesis, holds that as long as they possess a sufficient level of (...)
    28 citations
  31. What is it like to encounter an autonomous artificial agent?Karsten Weber - 2013 - AI and Society 28 (4):483-489.
    Following up on Thomas Nagel’s paper “What is it like to be a bat?” and Alan Turing’s essay “Computing machinery and intelligence,” it shall be claimed that a successful interaction of human beings and autonomous artificial agents depends more on which characteristics human beings ascribe to the agent than on whether the agent really has those characteristics. It will be argued that Masahiro Mori’s concept of the “uncanny valley” as well as evidence from several empirical studies supports that (...)
    3 citations
  32. Can we Develop Artificial Agents Capable of Making Good Moral Decisions?: Wendell Wallach and Colin Allen: Moral Machines: Teaching Robots Right from Wrong, Oxford University Press, 2009, xi + 273 pp, ISBN: 978-0-19-537404-9.Herman T. Tavani - 2011 - Minds and Machines 21 (3):465-474.
  33. A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents.Wendell Wallach, Stan Franklin & Colin Allen - 2010 - Topics in Cognitive Science 2 (3):454-485.
    Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks (...)
    24 citations
  34. Privacy and artificial agents, or, is Google reading my email?Samir Chopra & Laurence White - manuscript
    in Proceedings of the International Joint Conference on Artificial Intelligence, 2007.
  35. Film. Mirrors of nature: artificial agents in real life and virtual worlds.Paul Dumouchel - 2015 - In Scott Cowdell, Chris Fleming & Joel Hodge (eds.), Mimesis, movies, and media. Bloomsbury Academic.
     
  36. Categorization in artificial agents: Guidance on empirical research?William S.-Y. Wang & Tao Gong - 2005 - Behavioral and Brain Sciences 28 (4):511-512.
    By comparing mechanisms in nativism, empiricism, and culturalism, the target article by Steels & Belpaeme (S&B) emphasizes the influence of communicational constraint on sharing color categories. Our commentary suggests deeper considerations of some of their claims, and discusses some modifications that may help in the study of communicational constraints in both humans and robots.
  37. The influence of epistemology on the design of artificial agents.Mark Lee & Nick Lacey - 2003 - Minds and Machines 13 (3):367-395.
    Unlike natural agents, artificial agents are, to varying extent, designed according to sets of principles or assumptions. We argue that the designer’s philosophical position on truth, belief and knowledge has far reaching implications for the design and performance of the resulting agents. Of the many sources of design information and background we believe philosophical theories are under-rated as valuable influences on the design process. To explore this idea we have implemented some computer-based agents with their (...)
    3 citations
  38. Representation in natural and artificial agents.M. Bickhard - 1999 - In Edwina Taborsky (ed.), Semiosis. Evolution. Energy: Towards a Reconceptualization of the Sign. Shaker Verlag. pp. 15--26.
  39. Can a Robot Lie? Exploring the Folk Concept of Lying as Applied to Artificial Agents.Markus Kneer - 2021 - Cognitive Science 45 (10):e13032.
    The potential capacity for robots to deceive has received considerable attention recently. Many papers explore the technical possibility for a robot to engage in deception for beneficial purposes (e.g., in education or health). In this short experimental paper, I focus on a more paradigmatic case: robot lying (lying being the textbook example of deception) for nonbeneficial purposes as judged from the human point of view. More precisely, I present an empirical experiment that investigates the following three questions: (a) Are ordinary (...)
    1 citation
  40. Beyond persons: extending the personal/subpersonal distinction to non-rational animals and artificial agents.Manuel de Pinedo-Garcia & Jason Noble - 2008 - Biology and Philosophy 23 (1):87-100.
    The distinction between personal level explanations and subpersonal ones has been subject to much debate in philosophy. We understand it as one between explanations that focus on an agent’s interaction with its environment, and explanations that focus on the physical or computational enabling conditions of such an interaction. The distinction, understood this way, is necessary for a complete account of any agent, rational or not, biological or artificial. In particular, we review some recent research in Artificial Life that (...)
    7 citations
  41. Learning to Manipulate and Categorize in Human and Artificial Agents.Giuseppe Morlino, Claudia Gianelli, Anna M. Borghi & Stefano Nolfi - 2015 - Cognitive Science 39 (1):39-64.
    This study investigates the acquisition of integrated object manipulation and categorization abilities through a series of experiments in which human adults and artificial agents were asked to learn to manipulate two-dimensional objects that varied in shape, color, weight, and color intensity. The analysis of the obtained results and the comparison of the behavior displayed by human and artificial agents allowed us to identify the key role played by features affecting the agent/environment interaction, the relation between category (...)
    4 citations
  42. Bio-Agency and the Possibility of Artificial Agents.Anne Sophie Meincke - 2018 - In Antonio Piccolomini D’Aragona, Martin Carrier, Roger Deulofeu, Axel Gelfert, Jens Harbecke, Paul Hoyningen-Huene, Lara Huber, Peter Hucklenbroich, Ludger Jansen, Elizaveta Kostrova, Keizo Matsubara, Anne Sophie Meincke, Andrea Reichenberger, Kian Salimkhani & Javier Suárez (eds.), Philosophy of Science: Between the Natural Sciences, the Social Sciences, and the Humanities. Springer Verlag. pp. 65-93.
    Within the philosophy of biology, recently promising steps have been made towards a biologically grounded concept of agency. Agency is described as bio-agency: the intrinsically normative adaptive behaviour of human and non-human organisms, arising from their biological autonomy. My paper assesses the bio-agency approach by examining criticism recently directed by its proponents against the project of embodied robotics. Defenders of the bio-agency approach have claimed that embodied robots do not, and for fundamental reasons cannot, qualify as artificial agents (...)
    2 citations
  43. Bio-Agency and the Possibility of Artificial Agents.Anne Sophie Meincke - 2018 - In Alexander Christian, David Hommen, Nina Retzlaff & Gerhard Schurz (eds.), Philosophy of Science - Between the Natural Sciences, the Social Sciences, and the Humanities. Selected Papers from the 2016 conference of the German Society of Philosophy of Science. Dordrecht, Netherlands: pp. 65-93.
    Within the philosophy of biology, recently promising steps have been made towards a biologically grounded concept of agency. Agency is described as bio-agency: the intrinsically normative adaptive behaviour of human and non-human organisms, arising from their biological autonomy. My paper assesses the bio-agency approach by examining criticism recently directed by its proponents against the project of embodied robotics. Defenders of the bio-agency approach have claimed that embodied robots do not, and for fundamental reasons cannot, qualify as artificial agents (...)
     
    1 citation
  44. Understanding Sophia? On human interaction with artificial agents.Thomas Fuchs - forthcoming - Phenomenology and the Cognitive Sciences:1-22.
    Advances in artificial intelligence create an increasing similarity between the performance of AI systems or AI-based robots and human communication. They raise the questions: whether it is possible to communicate with, understand, and even empathically perceive artificial agents; whether we should ascribe actual subjectivity and thus quasi-personal status to them beyond a certain level of simulation; what will be the impact of an increasing dissolution of the distinction between simulated and real encounters. To answer these questions, the (...)
  45. The Influence of Epistemology on the Design of Artificial Agents.Mark Lee & Nick Lacey - 2003 - Minds and Machines 13 (3):367-395.
    Unlike natural agents, artificial agents are, to varying extent, designed according to sets of principles or assumptions. We argue that the designer’s philosophical position on truth, belief and knowledge has far reaching implications for the design and performance of the resulting agents. Of the many sources of design information and background we believe philosophical theories are under-rated as valuable influences on the design process. To explore this idea we have implemented some computer-based agents with their (...)
    2 citations
  46. Artificial moral agents are infeasible with foreseeable technologies.Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and are a low prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
    11 citations
  47. ConsScale: A pragmatic scale for measuring the level of consciousness in artificial agents.Raul Arrabales, Agapito Ledezma & Araceli Sanchis - 2010 - Journal of Consciousness Studies 17 (3-4):3-4.
    One of the key problems the field of Machine Consciousness is currently facing is the need to accurately assess the potential level of consciousness that an artificial agent might develop. This paper presents a novel artificial consciousness scale designed to provide a pragmatic and intuitive reference in the evaluation of MC implementations. The version of ConsScale described in this work provides a comprehensive evaluation mechanism which enables the estimation of the potential degree of consciousness of most of the (...)
    4 citations
  48. Is it possible to grow an I–Thou relation with an artificial agent? A dialogistic perspective.Stefan Trausan-Matu - 2019 - AI and Society 34 (1):9-17.
    The paper analyzes if it is possible to grow an I–Thou relation in the sense of Martin Buber with an artificial, conversational agent developed with Natural Language Processing techniques. The requirements for such an agent, the possible approaches for the implementation, and their limitations are discussed. The relation of the achievement of this goal with the Turing test is emphasized. Novel perspectives on the I–Thou and I–It relations are introduced according to the sociocultural paradigm and Mikhail Bakhtin’s dialogism, polyphony (...)
    2 citations
  49. Beyond persons: extending the personal/subpersonal distinction to non-rational animals and artificial agents.Manuel Pinedo-Garcia & Jason Noble - 2008 - Biology and Philosophy 23 (1):87-100.
    The distinction between personal level explanations and subpersonal ones has been subject to much debate in philosophy. We understand it as one between explanations that focus on an agent’s interaction with its environment, and explanations that focus on the physical or computational enabling conditions of such an interaction. The distinction, understood this way, is necessary for a complete account of any agent, rational or not, biological or artificial. In particular, we review some recent research in Artificial Life that (...)
    4 citations
  50. Virtual moral agency, virtual moral responsibility: on the moral significance of the appearance, perception, and performance of artificial agents. [REVIEW]Mark Coeckelbergh - 2009 - AI and Society 24 (2):181-189.
1 — 50 / 1000