Results for 'Autonomous artificial agent'

999 found
  1. What is it like to encounter an autonomous artificial agent? Karsten Weber - 2013 - AI and Society 28 (4):483-489.
    Following up on Thomas Nagel’s paper “What is it like to be a bat?” and Alan Turing’s essay “Computing machinery and intelligence,” it shall be claimed that a successful interaction of human beings and autonomous artificial agents depends more on which characteristics human beings ascribe to the agent than on whether the agent really has those characteristics. It will be argued that Masahiro Mori’s concept of the “uncanny valley” as well as evidence from several empirical studies (...)
    3 citations
  2. A Legal Theory for Autonomous Artificial Agents by Samir Chopra and Laurence F. White. D. A. Coady - 2012 - Australian Journal of Legal Philosophy 37 (2012):349-50.
  3. A minimalist model of the artificial autonomous moral agent (AAMA). Ioan Muntean & Don Howard - 2016 - In SSS-16 Symposium Technical Reports. Association for the Advancement of Artificial Intelligence (AAAI).
    This paper proposes a model for an artificial autonomous moral agent (AAMA), which is parsimonious in its ontology and minimal in its ethical assumptions. Starting from a set of moral data, this AAMA is able to learn and develop a form of moral competency. It resembles an “optimizing predictive mind,” which uses moral data (describing typical behavior of humans) and a set of dispositional traits to learn how to classify different actions (given background knowledge) as (...)
    5 citations
  4. Can Artificial Intelligence be an Autonomous Moral Agent? 신상규 - 2017 - Cheolhak-Korean Journal of Philosophy 132:265-292.
    The concept of a ‘moral agent’ has traditionally been restricted to personal beings endowed with free will who can take responsibility for their actions. The emergence of autonomous AI capable of performing a variety of actions with moral implications, however, seems to demand a revision of this concept of agency. In this paper I argue that an AI satisfying certain conditions can be granted the status of a moral agent in a functional sense, without presupposing personhood. To that extent, it also becomes possible to ascribe to AI a responsibility or accountability commensurate with its agency. To support this claim, the paper addresses several anticipated objections (...)
  5. Modeling artificial agents’ actions in context – a deontic cognitive event ontology. Miroslav Vacura - 2020 - Applied Ontology 15 (4):493-527.
    Although there have been efforts to integrate Semantic Web technologies and AI research on artificial agents, the two remain relatively isolated from each other. Herein, we introduce a new ontology framework designed to support the knowledge representation of artificial agents’ actions within the context of the actions of other autonomous agents, inspired by standard cognitive architectures. The framework consists of four parts: 1) an event ontology for information pertaining to actions and events; 2) an epistemic ontology (...)
  6. Autonomous Artificial Intelligence and Liability: a Comment on List. Michael Da Silva - 2022 - Philosophy and Technology 35 (2):1-6.
    Christian List argues that responsibility gaps created by viewing artificial intelligence as intentional agents are problematic enough that regulators should only permit the use of autonomous AI in high-stakes settings where AI is designed to be moral or a liability transfer agreement will fill any gaps. This work challenges List’s proposed condition. A requirement for “moral” AI is too onerous given technical challenges and other ways to check AI quality. Moreover, transfer agreements only plausibly fill responsibility gaps by (...)
  7. Ethics and consciousness in artificial agents. Steve Torrance - 2008 - AI and Society 22 (4):495-521.
    In what ways should we include future humanoid robots, and other kinds of artificial agents, in our moral universe? We consider the Organic view, which maintains that artificial humanoid agents, based on current computational technologies, could not count as full-blooded moral agents, nor as appropriate targets of intrinsic moral concern. On this view, artificial humanoids lack certain key properties of biological organisms, which preclude them from having full moral status. Computationally controlled systems, however advanced in their cognitive (...)
    59 citations
  8. The ethics of designing artificial agents. Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2008 - Ethics and Information Technology 10 (2-3):115-121.
    In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial agents. One (...)
    20 citations
  9. Hiding Behind Machines: Artificial Agents May Help to Evade Punishment. Till Feier, Jan Gogoll & Matthias Uhl - 2022 - Science and Engineering Ethics 28 (2):1-19.
    The transfer of tasks with sometimes far-reaching implications to autonomous systems raises a number of ethical questions. In addition to fundamental questions about the moral agency of these systems, behavioral issues arise. We investigate the empirically accessible question of whether the imposition of harm by an agent is systematically judged differently when the agent is artificial and not human. The results of a laboratory experiment suggest that decision-makers can actually avoid punishment more easily by delegating to (...)
    2 citations
  10. A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents. Wendell Wallach, Stan Franklin & Colin Allen - 2010 - Topics in Cognitive Science 2 (3):454-485.
    Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks for (...)
    24 citations
  11. Anticipatory Functions, Digital-Analog Forms and Biosemiotics: Integrating the Tools to Model Information and Normativity in Autonomous Biological Agents. Argyris Arnellos, Luis Emilio Bruni, Charbel Niño El-Hani & John Collier - 2012 - Biosemiotics 5 (3):331-367.
    We argue that living systems process information such that functionality emerges in them on a continuous basis. We then provide a framework that can explain and model the normativity of biological functionality. In addition we offer an explanation of the anticipatory nature of functionality within our overall approach. We adopt a Peircean approach to Biosemiotics, and a dynamical approach to Digital-Analog relations and to the interplay between different levels of functionality in autonomous systems, taking an integrative approach. We then (...)
    15 citations
  12. The ethics of designing artificial agents. Frances S. Grodzinsky, Keith W. Miller & Marty J. Wolf - 2008 - Ethics and Information Technology 10 (2-3):112-121.
    In their important paper “Autonomous Agents”, Floridi and Sanders use “levels of abstraction” to argue that computers are or may soon be moral agents. In this paper we use the same levels of abstraction to illuminate differences between human moral agents and computers. In their paper, Floridi and Sanders contributed definitions of autonomy, moral accountability and responsibility, but they have not explored deeply some essential questions that need to be answered by computer scientists who design artificial agents. One (...)
    1 citation
  13. The influence of epistemology on the design of artificial agents. Mark Lee & Nick Lacey - 2003 - Minds and Machines 13 (3):367-395.
    Unlike natural agents, artificial agents are, to varying extents, designed according to sets of principles or assumptions. We argue that the designer's philosophical position on truth, belief and knowledge has far-reaching implications for the design and performance of the resulting agents. Of the many sources of design information and background, we believe philosophical theories are under-rated as valuable influences on the design process. To explore this idea we have implemented some computer-based agents with their control algorithms inspired by (...)
    3 citations
  14. Artificial Intelligence in Service of Human Needs: Pragmatic First Steps Toward an Ethics for Semi-Autonomous Agents. Travis N. Rieder, Brian Hutler & Debra J. H. Mathews - 2020 - American Journal of Bioethics Neuroscience 11 (2):120-127.
  15. Artificial moral agents: creative, autonomous, social. An approach based on evolutionary computation. Ioan Muntean & Don Howard - 2014 - In Johanna Seibt, Raul Hakli & Marco Nørskov (eds.), Frontiers in Artificial Intelligence and Applications.
  16. Philosophical, Experimental and Synthetic Phenomenology: The Study of Perception for Biological, Artificial Agents and Environments. Carmelo Calì - 2023 - Foundations of Science 28 (4):1111-1124.
    In this paper the relationship between phenomenology of perception and synthetic phenomenology is discussed. Synthetic phenomenology is presented on the basis of the issues in A.I. and Robotics that made it necessary to address the question of what enables artificial agents to have phenomenal access to the environment. Phenomenology of perception is construed as a theory with an autonomous structure and domain, which can be embedded in a philosophical as well as a scientific theory. Two attempts at specifying the phenomenal content (...)
  17. A Systematic Approach to Autonomous Agents. Gordana Dodig-Crnkovic & Mark Burgin - 2024 - Philosophies 9 (2):44.
    Agents and agent-based systems are becoming essential in the development of various fields, such as artificial intelligence, ubiquitous computing, ambient intelligence, autonomous computing, and intelligent robotics. The concept of autonomous agents, inspired by the observed agency in living systems, is also central to current theories on the origin, development, and evolution of life. Therefore, it is crucial to develop an accurate understanding of agents and the concept of agency. This paper begins by discussing the role of (...)
  18. Artificial Moral Cognition: Moral Functionalism and Autonomous Moral Agency. Ioan Muntean & Don Howard - 2017 - In Thomas Powers (ed.), Philosophy and Computing: Essays in Epistemology, Philosophy of Mind, Logic, and Ethics. Springer.
    This paper proposes a model of the Artificial Autonomous Moral Agent (AAMA), discusses a standard of moral cognition for AAMA, and compares it with other models of artificial normative agency. It is argued here that artificial morality is possible within the framework of a “moral dispositional functionalism.” This AAMA is able to “read” the behavior of human actors, available as collected data, and to categorize their moral behavior based on moral patterns herein. The present model (...)
    1 citation
  19. Can Artificial Intelligence Be an Autonomous Entity? 고인석 - 2017 - Cheolhak-Korean Journal of Philosophy 133:163-187.
    Can artificial intelligence be an entity that possesses autonomy? This question matters in practice because autonomy is a precondition for notions such as responsibility and rights, and thus provides the theoretical basis for norms governing how responsibility and credit for ‘what AI accomplishes’ are distributed in today's society. The engineering concept of an ‘autonomous agent’ also influences current philosophical discussion, and there is a real possibility of confusion because the relationship between the engineering and philosophical concepts of autonomy has not been clearly understood. With this problem in mind, this paper analyzes the opening question into a question about possibility and a question about justification, and examines the prospects for autonomy in artificial intelligence. In the course of the discussion, the claim that ‘X has autonomy’ (...)
  20. Autonomous agents with norms. Frank Dignum - 1999 - Artificial Intelligence and Law 7 (1):69-79.
    In this paper we present some concepts and their relations that are necessary for modeling autonomous agents in an environment that is governed by some (social) norms. We divide the norms over three levels: the private level, the contract level and the convention level. We show how deontic logic can be used to model the concepts and how the theory of speech acts can be used to model the generation of (some of) the norms. Finally we give some idea (...)
    20 citations
  21. Un-making artificial moral agents. Deborah G. Johnson & Keith W. Miller - 2008 - Ethics and Information Technology 10 (2-3):123-133.
    Floridi and Sanders’ seminal work, “On the morality of artificial agents”, has catalyzed attention around the moral status of computer systems that perform tasks for humans, effectively acting as “artificial agents.” Floridi and Sanders argue that the class of entities considered moral agents can be expanded to include computers if we adopt the appropriate level of abstraction. In this paper we argue that the move to distinguish levels of abstraction is far from decisive on this issue. We also (...)
    36 citations
  22. Autonomous agents modelling other agents: A comprehensive survey and open problems. Stefano V. Albrecht & Peter Stone - 2018 - Artificial Intelligence 258 (C):66-95.
  23. Moral Agents or Mindless Machines? A Critical Appraisal of Agency in Artificial Systems. Fabio Tollon - 2019 - Hungarian Philosophical Review 4 (63):9-23.
    In this paper I provide an exposition and critique of Johnson and Noorman’s (2014) three conceptualizations of the agential roles artificial systems can play. I argue that two of these conceptions are unproblematic: that of causally efficacious agency and “acting for” or surrogate agency. Their third conception, that of “autonomous agency,” however, is one I have reservations about. The authors point out that there are two ways in which the term “autonomy” can be used: there is, firstly, the (...)
    3 citations
  24. Virtuous vs. utilitarian artificial moral agents. William A. Bauer - 2020 - AI and Society (1):263-271.
    Given that artificial moral agents—such as autonomous vehicles, lethal autonomous weapons, and automated financial trading systems—are now part of the socio-ethical equation, we should morally evaluate their behavior. How should artificial moral agents make decisions? Is one moral theory better suited than others for machine ethics? After briefly overviewing the dominant ethical approaches for building morality into machines, this paper discusses a recent proposal, put forward by Don Howard and Ioan Muntean (2016, 2017), for an artificial moral agent based on virtue theory. While the virtuous artificial moral agent has various strengths, this paper argues that a rule-based utilitarian approach (in contrast to a strict act-utilitarian approach) is superior because it can capture the most important features of the virtue-theoretic approach while realizing additional significant benefits. Specifically, a 2-level utilitarian artificial moral agent incorporating both established moral rules and a utility calculator is especially well-suited for machine ethics.
    13 citations
  25. Philosophical Signposts for Artificial Moral Agent Frameworks. Robert James M. Boyles - 2017 - Suri 6 (2):92–109.
    This article focuses on a particular issue under machine ethics—that is, the nature of Artificial Moral Agents. Machine ethics is a branch of artificial intelligence that looks into the moral status of artificial agents. Artificial moral agents, on the other hand, are artificial autonomous agents that possess moral value, as well as certain rights and responsibilities. This paper demonstrates that attempts to fully develop a theory that could possibly account for the nature of Artificial Moral Agents may consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency. At the very least, the said philosophical concepts may be treated as signposts for further research on how to truly account for the nature of Artificial Moral Agents.
  26. Machine and human agents in moral dilemmas: automation–autonomic and EEG effect. Federico Cassioli, Laura Angioletti & Michela Balconi - forthcoming - AI and Society:1-13.
    Automation is inherently tied to ethical challenges because of its potential involvement in morally loaded decisions. In the present research, participants (n = 34) took part in a moral multi-trial dilemma-based task where the agent (human vs. machine) and the behavior (action vs. inaction) factors were randomized. Self-report measures, in terms of morality, consciousness, responsibility, intentionality, and emotional impact evaluation were gathered, together with electroencephalography (delta, theta, beta, upper and lower alpha, and gamma powers) and peripheral autonomic (electrodermal activity, (...)
  27. Autonomous Reboot: the challenges of artificial moral agency and the ends of Machine Ethics. Jeffrey White - manuscript
    Ryan Tonkens (2009) has issued a seemingly impossible challenge, to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian inspired recipe - both "rational" and "free" - while also satisfying perceived prerogatives of Machine Ethics to create AMAs that are perfectly, not merely reliably, ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach, who have pushed for the reinvention of traditional ethics in order to avoid "ethical nihilism" due (...)
  28. Agents of History: Autonomous agents and crypto-intelligence. Bernard Dionysius Geoghegan - 2008 - Interaction Studies 9 (3):403-414.
    World War II research into cryptography and computing produced methods, instruments and research communities that informed early research into artificial intelligence and semi-autonomous computing. Alan Turing and Claude Shannon in particular adapted this research into early theories and demonstrations of AI based on computers’ abilities to track, predict and compete with opponents. This formed a loosely bound collection of techniques, paradigms, and practices I call crypto-intelligence. Subsequent researchers such as Joseph Weizenbaum adapted crypto-intelligence but also reproduced aspects of (...)
  29. Agents of History: Autonomous agents and crypto-intelligence. Bernard Dionysius Geoghegan - 2008 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 9 (3):403-414.
    World War II research into cryptography and computing produced methods, instruments and research communities that informed early research into artificial intelligence and semi-autonomous computing. Alan Turing and Claude Shannon in particular adapted this research into early theories and demonstrations of AI based on computers’ abilities to track, predict and compete with opponents. This formed a loosely bound collection of techniques, paradigms, and practices I call crypto-intelligence. Subsequent researchers such as Joseph Weizenbaum adapted crypto-intelligence but also reproduced aspects of (...)
  30. Prolegomena to any future artificial moral agent. Colin Allen & Gary Varner - 2000 - Journal of Experimental and Theoretical Artificial Intelligence 12 (3):251-261.
    As artificial intelligence moves ever closer to the goal of producing fully autonomous agents, the question of how to design and implement an artificial moral agent (AMA) becomes increasingly pressing. Robots possessing autonomous capacities to do things that are useful to humans will also have the capacity to do things that are harmful to humans and other sentient beings. Theoretical challenges to developing artificial moral agents result both from controversies among ethicists about moral theory (...)
    75 citations
  31. When autonomous agents model other agents: An appeal for altered judgment coupled with mouths, ears, and a little more tape. Jacob W. Crandall - 2020 - Artificial Intelligence 280 (C):103219.
    1 citation
  32. Making moral machines: why we need artificial moral agents. Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a (...)
    10 citations
  33. Moral sensitivity and the limits of artificial moral agents. Joris Graff - 2024 - Ethics and Information Technology 26 (1):1-12.
    Machine ethics is the field that strives to develop ‘artificial moral agents’ (AMAs), artificial systems that can autonomously make moral decisions. Some authors have questioned the feasibility of machine ethics, by questioning whether artificial systems can possess moral competence, or the capacity to reach morally right decisions in various situations. This paper explores this question by drawing on the work of several moral philosophers (McDowell, Wiggins, Hampshire, and Nussbaum) who have characterised moral competence in a manner inspired (...)
  34. Do androids dream of normative endorsement? On the fallibility of artificial moral agents. Frodo Podschwadek - 2017 - Artificial Intelligence and Law 25 (3):325-339.
    The more autonomous future artificial agents will become, the more important it seems to equip them with a capacity for moral reasoning and to make them autonomous moral agents. Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous moral agents who endorse moral rules as action-guiding. (...)
    4 citations
  35. A principlist-based study of the ethical design and acceptability of artificial social agents. Paul Formosa - 2023 - International Journal of Human-Computer Studies 172.
    Artificial Social Agents (ASAs), which are AI software-driven entities programmed with rules and preferences to act autonomously and socially with humans, are increasingly playing roles in society. As their sophistication grows, humans will share greater amounts of personal information, thoughts, and feelings with ASAs, which has significant ethical implications. We conducted a study to investigate what ethical principles are of relative importance when people engage with ASAs and whether there is a relationship between people’s values and the ethical (...)
  36. Creating a discoverer: Autonomous knowledge seeking agent. [REVIEW] Jan M. Zytkow - 1995 - Foundations of Science 1 (2):253-283.
    Construction of a robot discoverer can be treated as the ultimate success of automated discovery. In order to build such an agent we must understand algorithmic details of the discovery processes and the representation of scientific knowledge needed to support the automation. To understand the discovery process we must build automated systems. This paper investigates the anatomy of a robot-discoverer, examining various components developed and refined to varying degrees over two decades. We also clarify the notion of autonomy (...)
    1 citation
  37. Body Schema in Autonomous Agents. Zachariah A. Neemeh & Christian Kronsted - 2021 - Journal of Artificial Intelligence and Consciousness 1 (8):113-145.
    A body schema is an agent's model of its own body that enables it to act on affordances in the environment. This paper presents a body schema system for the Learning Intelligent Decision Agent (LIDA) cognitive architecture. LIDA is a conceptual and computational implementation of Global Workspace Theory, also integrating other theories from neuroscience and psychology. This paper contends that the ‘body schema' should be split into three separate functions based on the functional role of consciousness in Global (...)
    1 citation
  38. What Is the Model of Trust for Multi-agent Systems? Whether or Not E-Trust Applies to Autonomous Agents. Massimo Durante - 2010 - Knowledge, Technology & Policy 23 (3):347-366.
    A socio-cognitive approach to trust can help us envisage a notion of networked trust for multi-agent systems (MAS) based on different interacting agents. In this framework, the issue is to evaluate whether or not a socio-cognitive analysis of trust can apply to the interactions between human and autonomous agents. Two main arguments support two alternative hypotheses; one suggests that only reliance applies to artificial agents, because predictability of agents’ digital interaction is viewed as an absolute value and (...)
    7 citations
  39. Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible. Daniel W. Tigard - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):435-447.
    Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘artificial (...)
    13 citations
  40. Artificial Evil and the Foundation of Computer Ethics. Luciano Floridi & J. W. Sanders - 2001 - Springer Netherlands.
    Moral reasoning traditionally distinguishes two types of evil: moral (ME) and natural (NE). The standard view is that ME is the product of human agency and so includes phenomena such as war, torture and psychological cruelty; that NE is the product of nonhuman agency, and so includes natural disasters such as earthquakes, floods, disease and famine; and finally, that more complex cases are appropriately analysed as a combination of ME and NE. Recently, as a result of developments in autonomous agents in (...)
    29 citations
  41. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles. Steven Umbrello & Roman Yampolskiy - 2022 - International Journal of Social Robotics 14 (2):313-322.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model (...)
    5 citations
  42. Special issue on autonomous agents modelling other agents: Guest editorial. Stefano V. Albrecht, Peter Stone & Michael P. Wellman - 2020 - Artificial Intelligence 285 (C):103292.
  43. A neo-Aristotelian perspective on the need for artificial moral agents (AMAs). Alejo José G. Sison & Dulce M. Redín - 2023 - AI and Society 38 (1):47-65.
    We examine Van Wynsberghe and Robbins’ (JAMA 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (JAMA 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins’ essay nor Formosa and Ryan’s is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. 721) (...)
    3 citations
  44. From Pluralistic Normative Principles to Autonomous-Agent Rules. Beverley Townsend, Colin Paterson, T. T. Arvind, Gabriel Nemirovsky, Radu Calinescu, Ana Cavalcanti, Ibrahim Habli & Alan Thomas - 2022 - Minds and Machines 32 (4):683-715.
    With recent advancements in systems engineering and artificial intelligence, autonomous agents are increasingly being called upon to execute tasks that have normative relevance. These are tasks that directly—and potentially adversely—affect human well-being and demand of the agent a degree of normative-sensitivity and -compliance. Such norms and normative principles are typically of a social, legal, ethical, empathetic, or cultural (‘SLEEC’) nature. Whereas norms of this type are often framed in the abstract, or as high-level principles, addressing normative concerns (...)
    1 citation
  45. Robot shaping: developing autonomous agents through learning. Marco Dorigo & Marco Colombetti - 1994 - Artificial Intelligence 71 (2):321-370.
  46. Autonomous Weapons Systems and the Moral Equality of Combatants. Michael Skerker, Duncan Purves & Ryan Jenkins - 2020 - Ethics and Information Technology 3 (6).
    To many, the idea of autonomous weapons systems (AWS) killing human beings is grotesque. Yet critics have had difficulty explaining why it should make a significant moral difference if a human combatant is killed by an AWS as opposed to being killed by a human combatant. The purpose of this paper is to explore the roots of various deontological concerns with AWS and to consider whether these concerns are distinct from any concerns that also apply to long-distance, human-guided (...)
    2 citations
  47. From Pluralistic Normative Principles to Autonomous-Agent Rules. Beverley Townsend, Colin Paterson, T. T. Arvind, Gabriel Nemirovsky, Radu Calinescu, Ana Cavalcanti, Ibrahim Habli & Alan Thomas - 2022 - Minds and Machines 1:1-33.
    With recent advancements in systems engineering and artificial intelligence, autonomous agents are increasingly being called upon to execute tasks that have normative relevance. These are tasks that directly—and potentially adversely—affect human well-being and demand of the agent a degree of normative-sensitivity and -compliance. Such norms and normative principles are typically of a social, legal, ethical, empathetic, or cultural nature. Whereas norms of this type are often framed in the abstract, or as high-level principles, addressing normative concerns in (...)
    1 citation
  48. Autonomous weapons systems and the moral equality of combatants. Michael Skerker, Duncan Purves & Ryan Jenkins - 2020 - Ethics and Information Technology 22 (3):197-209.
    To many, the idea of autonomous weapons systems (AWS) killing human beings is grotesque. Yet critics have had difficulty explaining why it should make a significant moral difference if a human combatant is killed by an AWS as opposed to being killed by a human combatant. The purpose of this paper is to explore the roots of various deontological concerns with AWS and to consider whether these concerns are distinct from any concerns that also apply to long-distance, human-guided weaponry. (...)
    5 citations
  49. Dialogue Games in Multi-Agent Systems. Peter McBurney & Simon Parsons - 2002 - Informal Logic 22 (3).
    Formal dialogue games have been studied in philosophy since at least the time of Aristotle. Recently they have been applied in various contexts in computer science and artificial intelligence, particularly as the basis for interaction between autonomous software agents. We review these applications and discuss the many open research questions and challenges at this exciting interface between philosophy and computer science.
    12 citations
  50. Group Agency and Artificial Intelligence. Christian List - 2021 - Philosophy and Technology (4):1-30.
    The aim of this exploratory paper is to review an under-appreciated parallel between group agency and artificial intelligence. As both phenomena involve non-human goal-directed agents that can make a difference to the social world, they raise some similar moral and regulatory challenges, which require us to rethink some of our anthropocentric moral assumptions. Are humans always responsible for those entities’ actions, or could the entities bear responsibility themselves? Could the entities engage in normative reasoning? Could they even have rights (...)
    21 citations
1 — 50 / 999