Results for 'Artificial moral agents (AMA)'

41 found
  1. A neo-aristotelian perspective on the need for artificial moral agents (AMAs). Alejo José G. Sison & Dulce M. Redín - 2023 - AI and Society 38 (1):47-65.
    We examine Van Wynsberghe and Robbins’ (JAMA 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (JAMA 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins’ (JAMA 25:719-735, 2019) essay nor Formosa and Ryan’s (JAMA 10.1007/s00146-020-01089-6, 2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, (...)
    3 citations
  2. Making moral machines: why we need artificial moral agents. Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In (...)
    11 citations
  3. Artificial Moral Agents Within an Ethos of AI4SG. Bongani Andy Mabaso - 2020 - Philosophy and Technology 34 (1):7-21.
    As artificial intelligence (AI) continues to proliferate into every area of modern life, there is no doubt that society has to think deeply about the potential impact, whether negative or positive, that it will have. Whilst scholars recognise that AI can usher in a new era of personal, social and economic prosperity, they also warn of the potential for it to be misused towards the detriment of society. Deliberate strategies are therefore required to ensure that AI can be safely (...)
    6 citations
  4. Artificial Moral Agents: Moral Mentors or Sensible Tools? Fabio Fossa - 2018 - Ethics and Information Technology (2):1-12.
    The aim of this paper is to offer an analysis of the notion of artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach. Its main claim holds that AMAs and human moral agents exhibit no significant qualitative difference and, therefore, should be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to. In (...)
    11 citations
  5. Artificial moral agents: an intercultural perspective. Michael Nagenborg - 2007 - International Review of Information Ethics 7 (9):129-133.
    In this paper I will argue that artificial moral agents are a fitting subject of intercultural information ethics because of the impact they may have on the relationship between information-rich and information-poor countries. I will give a limiting definition of AMAs first, and discuss two different types of AMAs with different implications from an intercultural perspective. While AMAs following preset rules might raise concerns about digital imperialism, AMAs being able to adjust to their user's behavior (...)
    5 citations
  6. Critiquing the Reasons for Making Artificial Moral Agents. Aimee van Wynsberghe & Scott Robbins - 2018 - Science and Engineering Ethics:1-17.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better (...) reasoners than humans, and building these machines would lead to a better understanding of human morality. Although some scholars have challenged the very initiative to develop AMAs, what is currently missing from the debate is a closer examination of the reasons offered by machine ethicists to justify the development of AMAs. This closer examination is especially needed because of the amount of funding currently being allocated to the development of AMAs, coupled with the amount of attention researchers and industry leaders receive in the media for their efforts in this direction. The stakes in this debate are high because moral robots would make demands on society: answers to a host of pending questions about what counts as an AMA and whether they are morally responsible for their behavior or not. This paper shifts the burden of proof back to the machine ethicists, demanding that they give good reasons to build AMAs. The paper argues that until this is done, the development of commercially available AMAs should not proceed further.
    45 citations
  7. Prolegomena to any future artificial moral agent. Colin Allen & Gary Varner - 2000 - Journal of Experimental and Theoretical Artificial Intelligence 12 (3):251-261.
    As artificial intelligence moves ever closer to the goal of producing fully autonomous agents, the question of how to design and implement an artificial moral agent (AMA) becomes increasingly pressing. Robots possessing autonomous capacities to do things that are useful to humans will also have the capacity to do things that are harmful to humans and other sentient beings. Theoretical challenges to developing artificial moral agents result both from controversies among ethicists about (...) theory itself, and from computational limits to the implementation of such theories. In this paper the ethical disputes are surveyed, the possibility of a 'moral Turing Test' is considered and the computational difficulties accompanying the different types of approach are assessed. Human-like performance, which is prone to include immoral actions, may not be acceptable in machines, but moral perfection may be computationally unattainable. The risks posed by autonomous machines ignorantly or deliberately harming people and other sentient beings are great. The development of machines with enough intelligence to assess the effects of their actions on sentient beings and act accordingly may ultimately be the most important task faced by the designers of artificially intelligent automata.
    77 citations
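A minimal sketch of the 'moral Turing Test' that entry 7 considers: a judge reads paired, anonymised answers to the same moral dilemma and tries to pick out the machine, with performance near chance counting as a pass. The callables `human_answer`, `machine_answer`, and `judge_guess` are hypothetical placeholders, not anything from Allen and Varner's paper.

```python
import random

def moral_turing_test(dilemmas, human_answer, machine_answer, judge_guess, trials=100):
    """Blind comparison loosely inspired by the 'moral Turing Test' of entry 7.

    Each trial shows the judge two anonymised answers to one moral dilemma,
    one human and one machine, and asks which is the machine. If the judge's
    accuracy stays near chance (0.5), the machine 'passes'. All three
    callables are hypothetical placeholders supplied by the caller.
    """
    correct = 0
    for _ in range(trials):
        dilemma = random.choice(dilemmas)
        answers = [("human", human_answer(dilemma)),
                   ("machine", machine_answer(dilemma))]
        random.shuffle(answers)  # hide which answer came from the machine
        guess_index = judge_guess(dilemma, [text for _, text in answers])  # 0 or 1
        if answers[guess_index][0] == "machine":
            correct += 1
    return correct / trials  # ~0.5 means the judge cannot tell them apart
```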
  8. On the Ethical Limitation of AMA (Artificial Moral Agent). 이향연 - 2021 - Journal of the Daedong Philosophical Association 95:103-118.
    This study examines whether the ethical approaches to AI that have recently drawn attention are valid. Such an examination requires both the fundamental question of whether AI can be ethical at all and, at the same time, a review of the methodology of how an AMA (artificial moral agent) could be technically implemented. This in turn includes the question of whether an AMA can be recognised as an autonomous entity, as well as the ethical approaches to AMAs that continue to be debated. I review the various discussions on these issues and analyse the characteristics and limitations of each. All of these examinations point to the fundamental limitations of the AMA (...)
  9. The Problem of the Possibility of an Artificial Moral Agent in the Context of Kant’s Practical Philosophy. Yulia Sergeevna Fedotova - 2023 - Kantian Journal 42 (4):225-239.
    The question of whether an artificial moral agent (AMA) is possible implies discussion of a whole range of problems raised by Kant within the framework of practical philosophy that have not exhausted their heuristic potential to this day. First, I show the significance of the correlation between moral law and freedom. Since a rational being believes that his/her will is independent of external influences, the will turns out to be governed by the moral law and is (...)
  10. The Ethical Principles for the Development of Artificial Moral Agent - Focusing on the Top-down Approach -. 최현철, 변순용 & 신현주 - 2016 - Journal of Ethics: The Korean Association of Ethics 1 (111):31-53.
    Robots that make moral decisions on their own are called 'Artificial Moral Agents' (AMAs), and current approaches to providing an ethics for such agents fall broadly into three kinds: the top-down approach, grounded in traditional utilitarian or deontological ethical theory; the bottom-up approach, which follows the methods of Kohlberg or Turing; and the hybrid approach, which seeks to combine the two. The top-down approach to designing an artificial moral agent selects a specific ethical theory and then derives a computational algorithm and system design capable of implementing that theory. The specific ethical theory chosen in this way is one in which moral intuition (...)
    1 citation
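As a rough illustration of the top-down approach entry 10 describes (pick an ethical theory, then derive an algorithm that implements it), here is a sketch combining deontological side-constraints with a utilitarian tie-break. The `Action` fields, rules, and numbers are invented for illustration and are not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_person: bool        # rule-relevant fact about the action
    breaks_promise: bool      # another rule-relevant fact
    expected_utility: float   # consequence estimate for the tie-break

# Top-down: start from an explicit ethical theory and turn it into code.
# Here: deontological side-constraints first, then a utilitarian ranking.
FORBIDDEN = [
    lambda a: a.harms_person,
    lambda a: a.breaks_promise,
]

def choose_action(candidates: list[Action]) -> Action | None:
    permissible = [a for a in candidates
                   if not any(rule(a) for rule in FORBIDDEN)]
    if not permissible:
        return None  # refuse: every option violates a side-constraint
    return max(permissible, key=lambda a: a.expected_utility)

actions = [
    Action("lie to protect a friend", harms_person=False,
           breaks_promise=True, expected_utility=5.0),
    Action("tell the truth", harms_person=False,
           breaks_promise=False, expected_utility=2.0),
]
print(choose_action(actions).name)  # -> "tell the truth"
```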
  11. An Experimental Discussion on the Possibility of Artificial Moral Agent in Neo-Confucianism - From the Point of View of Yulgok -. 최복희 - 2023 - Journal of the New Korean Philosophical Association 113:317-337.
    Although it is still only at an experimental stage, this paper examines whether Neo-Confucian concepts of the mind can be formally described and put into a form that a computer can handle. Building on discussions of the workings of the moral mind, I consider whether a simulation for implementing a Neo-Confucian moral agent is possible. In explaining the Neo-Confucian concept of mind, the mode of conceptualisation differs depending on whether one attends to the innate moral capacity that constitutes its substance or to the morality of the mind as it actually operates. Taking Yulgok's perspective, which is closer to the latter, to be comparatively empiricist, I use it as my example, on the thought that his attempt to explain the mind as plainly as a real thing, like the body, makes it well suited to the debate over artificial moral agents (AMAs). Yulgok explained the workings of the mind as the 'automatic pattern' [機自爾] of gi (氣) and proposed methods of self-cultivation for making that pattern operate according to moral principle. I therefore first analyse how the working of gi that he called the 'automatic pattern' unfolds in the mind, and then examine whether the mind's moral operation, that is, the prior manifestation of moral principle, can be formalised. Further, judging that unconscious morality, one of the points at issue in the AMA debate, cannot be admitted at the level of cultivating oneself (修己) but is possible at the level of governing others (治人), I examine Yulgok's theory of political reform in order to discuss a governing-others (治人) system as an impersonal model. I hope that this preliminary study will spark a full-fledged debate on the possibility of implementing a Neo-Confucian artificial moral agent.
  12. Can’t Bottom-up Artificial Moral Agents Make Moral Judgements? Robert James M. Boyles - 2024 - Filosofija. Sociologija 35 (1).
    This article examines if bottom-up artificial moral agents are capable of making genuine moral judgements, specifically in light of David Hume’s is-ought problem. The latter underscores the notion that evaluative assertions could never be derived from purely factual propositions. Bottom-up technologies, on the other hand, are those designed via evolutionary, developmental, or learning techniques. In this paper, the nature of these systems is looked into with the aim of preliminarily assessing if there are good reasons to (...)
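Entry 12's 'bottom-up' systems are built by learning rather than by explicit rules. The sketch below is a deliberately small caricature: a perceptron that learns a permissible/impermissible label from invented, hand-labelled cases. It also makes the is-ought worry concrete: everything the learner extracts is a descriptive regularity of the training data, with no evaluative premise entering anywhere.

```python
# A caricature of a bottom-up AMA: a perceptron learning a
# 'permissible'/'impermissible' label. Features and data are invented.

TRAINING = [
    # (involves_deception, causes_harm, consented) -> permissible?
    ((0, 0, 1), 1),
    ((1, 0, 0), 0),
    ((0, 1, 0), 0),
    ((1, 1, 0), 0),
    ((0, 0, 0), 1),
]

def train(data, epochs=20, lr=0.1):
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # perceptron update rule
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

w, b = train(TRAINING)
novel_case = (1, 0, 1)  # deception, no harm, consent: unseen combination
verdict = 1 if sum(wi * xi for wi, xi in zip(w, novel_case)) + b > 0 else 0
print("permissible" if verdict else "impermissible")
```

However the verdict on the novel case comes out, it is only a generalisation of the training labels; the system has no grounds for it beyond the factual pattern, which is exactly the gap the is-ought argument points at.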
  13. Critiquing the Reasons for Making Artificial Moral Agents. Aimee van Wynsberghe & Scott Robbins - 2019 - Science and Engineering Ethics 25 (3):719-735.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better (...) reasoners than humans, and building these machines would lead to a better understanding of human morality. Although some scholars have challenged the very initiative to develop AMAs, what is currently missing from the debate is a closer examination of the reasons offered by machine ethicists to justify the development of AMAs. This closer examination is especially needed because of the amount of funding currently being allocated to the development of AMAs, coupled with the amount of attention researchers and industry leaders receive in the media for their efforts in this direction. The stakes in this debate are high because moral robots would make demands on society: answers to a host of pending questions about what counts as an AMA and whether they are morally responsible for their behavior or not. This paper shifts the burden of proof back to the machine ethicists, demanding that they give good reasons to build AMAs. The paper argues that until this is done, the development of commercially available AMAs should not proceed further.
    47 citations
  14. Moral sensitivity and the limits of artificial moral agents. Joris Graff - 2024 - Ethics and Information Technology 26 (1):1-12.
    Machine ethics is the field that strives to develop ‘artificial moral agents’ (AMAs), artificial systems that can autonomously make moral decisions. Some authors have questioned the feasibility of machine ethics, by questioning whether artificial systems can possess moral competence, or the capacity to reach morally right decisions in various situations. This paper explores this question by drawing on the work of several moral philosophers (McDowell, Wiggins, Hampshire, and Nussbaum) who have characterised moral competence in a manner inspired by Aristotle. Although disparate in many ways, these philosophers all emphasise what may be called ‘moral sensitivity’ as a precondition for moral competence. Moral sensitivity is the uncodified, practical skill to recognise, in a range of situations, which features of the situations are morally relevant, and how they are relevant. This paper argues that the main types of AMAs currently proposed are incapable of full moral sensitivity. First, top-down AMAs that proceed from fixed rule-sets are too rigid to respond appropriately to the wide range of qualitatively unique factors that moral sensitivity gives access to. Second, bottom-up AMAs that learn moral behaviour from examples are at risk of generalising from these examples in undesirable ways, as they lack embedding in what Wittgenstein calls a ‘form of life’, which allows humans to appropriately learn from moral examples. The paper concludes that AMAs are unlikely to possess full moral competence, but closes by suggesting that they may still be feasible in restricted domains of public morality, where moral sensitivity plays a smaller role.
  15. Extending the Is-ought Problem to Top-down Artificial Moral Agents. Robert James M. Boyles - 2022 - Symposion: Theoretical and Applied Inquiries in Philosophy and Social Sciences 9 (2):171-189.
    This paper further cashes out the notion that particular types of intelligent systems are susceptible to the is-ought problem, which espouses the thesis that no evaluative conclusions may be inferred from factual premises alone. Specifically, it focuses on top-down artificial moral agents, providing ancillary support to the view that these kinds of artifacts are not capable of producing genuine moral judgements. Such is the case given that machines built via the classical programming approach are always composed (...)
  16. Should Moral Machines be Banned? A Commentary on van Wynsberghe and Robbins “Critiquing the Reasons for Making Artificial Moral Agents”. Bartek Chomanski - 2020 - Science and Engineering Ethics 26 (6):3469-3481.
    In a stimulating recent article for this journal (van Wynsberghe and Robbins in Sci Eng Ethics 25(3):719–735, 2019), Aimee van Wynsberghe and Scott Robbins mount a serious critique of a number of reasons advanced in favor of building artificial moral agents (AMAs). In light of their critique, vW&R make two recommendations: they advocate a moratorium on the commercialization of AMAs and suggest that the argumentative burden is now shifted onto the proponents of AMAs to come up with (...)
    1 citation
  17. Three Risks That Caution Against a Premature Implementation of Artificial Moral Agents for Practical and Economical Use. Christian Herzog - 2021 - Science and Engineering Ethics 27 (1):1-15.
    In the present article, I will advocate caution against developing artificial moral agents based on the notion that the utilization of preliminary forms of AMAs will potentially negatively feed back on the human social system and on human moral thought itself and its value—e.g., by reinforcing social inequalities, diminishing the breadth of employed ethical arguments and the value of character. While scientific investigations into AMAs pose no direct significant threat, I will argue against their premature utilization (...)
    1 citation
  18. Do androids dream of normative endorsement? On the fallibility of artificial moral agents. Frodo Podschwadek - 2017 - Artificial Intelligence and Law 25 (3):325-339.
    The more autonomous future artificial agents will become, the more important it seems to equip them with a capacity for moral reasoning and to make them autonomous moral agents. Some authors have even claimed that one of the aims of AI development should be to build morally praiseworthy agents. From the perspective of moral philosophy, praiseworthy moral agents, in any meaningful sense of the term, must be fully autonomous moral agents who endorse moral rules as action-guiding. They need to do so because they assign a normative value to moral rules they follow, not because they fear external consequences or because moral behaviour is hardwired into them. Artificial agents capable of endorsing moral rule systems in this way are certainly conceivable. However, as this article argues, full moral autonomy also implies the option of deliberately acting immorally. Therefore, the reasons for a potential AMA to act immorally would not exhaust themselves in errors to identify the morally correct action in a given situation. Rather, the failure to act morally could be induced by reflection about the incompleteness and incoherence of moral rule systems themselves, and a resulting lack of endorsement of moral rules as action guiding. An AMA questioning the moral framework it is supposed to act upon would fail to reliably act in accordance with moral standards.
    4 citations
  19. Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible. Daniel W. Tigard - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):435-447.
    Our ability to locate moral responsibility is often thought to be a necessary condition for conducting morally permissible medical practice, engaging in a just war, and other high-stakes endeavors. Yet, with increasing reliance upon artificially intelligent systems, we may be facing a widening responsibility gap, which, some argue, cannot be bridged by traditional concepts of responsibility. How then, if at all, can we make use of crucial emerging technologies? According to Colin Allen and Wendell Wallach, the advent of so-called ‘artificial moral agents’ (AMAs) is inevitable. Still, this notion may seem to push back the problem, leaving those who have an interest in developing autonomous technology with a dilemma. We may need to scale back our efforts at deploying AMAs (or at least maintain human oversight); otherwise, we must rapidly and drastically update our moral and legal norms in a way that ensures responsibility for potentially avoidable harms. This paper invokes contemporary accounts of responsibility in order to show how artificially intelligent systems might be held responsible. Although many theorists are concerned enough to develop artificial conceptions of agency or to exploit our present inability to regulate valuable innovations, the proposal here highlights the importance of, and outlines a plausible foundation for, a workable notion of artificial moral responsibility.
    13 citations
  20. Varieties of Artificial Moral Agency and the New Control Problem. Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human (...)
  21. The Artificial Moral Advisor. The “Ideal Observer” Meets Artificial Intelligence. Alberto Giubilini & Julian Savulescu - 2018 - Philosophy and Technology 31 (2):169-188.
    We describe a form of moral artificial intelligence that could be used to improve human moral decision-making. We call it the “artificial moral advisor”. The AMA would implement a quasi-relativistic version of the “ideal observer” famously described by Roderick Firth. We describe similarities and differences between the AMA and Firth’s ideal observer. Like Firth’s ideal observer, the AMA is disinterested, dispassionate, and consistent in its judgments. Unlike Firth’s observer, the AMA is non-absolutist, because it would (...)
    36 citations
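Entry 21's 'artificial moral advisor' is described as disinterested, dispassionate, consistent, and non-absolutist: it advises each user relative to that user's own values. Below is a minimal sketch of that idea under an invented value-weight representation with toy options; it is an illustration, not Giubilini and Savulescu's implementation.

```python
# Sketch of an 'artificial moral advisor' in the spirit of entry 21: like
# Firth's ideal observer it scores options dispassionately and consistently,
# but (the quasi-relativistic twist) relative to the advisee's own value
# weights. The representation below is invented for illustration.

def advise(options, value_weights):
    """options: {name: {value_name: degree the option realises that value}}
    value_weights: the advisee's own weighting of moral values."""
    def score(effects):
        return sum(value_weights.get(v, 0.0) * d for v, d in effects.items())
    ranked = sorted(options.items(), key=lambda kv: score(kv[1]), reverse=True)
    return [(name, round(score(effects), 2)) for name, effects in ranked]

advisee_values = {"welfare": 0.7, "fairness": 0.2, "autonomy": 0.1}
options = {
    "donate to effective charity": {"welfare": 0.9, "fairness": 0.6},
    "volunteer locally": {"welfare": 0.4, "fairness": 0.3, "autonomy": 0.8},
}
print(advise(options, advisee_values))
# -> [('donate to effective charity', 0.75), ('volunteer locally', 0.42)]
```

A different advisee, with different `advisee_values`, would get a different ranking from the same scoring procedure, which is the non-absolutist feature the abstract highlights.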
  22. Norms and Causation in Artificial Morality. Laura Fearnley - forthcoming - Joint Proceedings of ACM IUI:1-4.
    There has been an increasing interest in how to build Artificial Moral Agents (AMAs) that make moral decisions on the basis of causation rather than mere correlation. One promising avenue for achieving this is to use a causal modelling approach. This paper explores an open and important problem with such an approach; namely, the problem of what makes a causal model an appropriate model. I explore why we need to establish criteria for what makes a model (...)
  23. Robot Morals and Human Ethics. Wendell Wallach - 2010 - Teaching Ethics 11 (1):87-92.
    Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the “ought” of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, (...)
    25 citations
  24. Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors. Mihaela Constantinescu, Constantin Vică, Radu Uszkai & Cristina Voinea - 2022 - Philosophy and Technology 35 (2):1-26.
    Deep learning AI systems have proven a wide capacity to take over human-related activities such as car driving, medical diagnosing, or elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question whether highly autonomous AI may qualify as morally responsible agents. In this article, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility, by drawing on Aristotelian ethics and contemporary philosophical research. (...)
    5 citations
  25. What makes full artificial agents morally different. Erez Firt - forthcoming - AI and Society:1-10.
    In the research field of machine ethics, we commonly categorize artificial moral agents into four types, with the most advanced referred to as a full ethical agent, or sometimes a full-blown Artificial Moral Agent (AMA). This type has three main characteristics: autonomy, moral understanding and a certain level of consciousness, including intentional mental states, moral emotions such as compassion, the ability to praise and condemn, and a conscience. This paper aims to discuss various (...)
  26. Do Others Mind? Moral Agents Without Mental States. Fabio Tollon - 2021 - South African Journal of Philosophy 40 (2):182-194.
    As technology advances and artificial agents (AAs) become increasingly autonomous, start to embody morally relevant values and act on those values, there arises the issue of whether these entities should be considered artificial moral agents (AMAs). There are two main ways in which one could argue for AMA: using intentional criteria or using functional criteria. In this article, I provide an exposition and critique of “intentional” accounts of AMA. These accounts claim that moral agency (...)
  27. Autonomous Reboot: the challenges of artificial moral agency and the ends of Machine Ethics. Jeffrey White - manuscript
    Ryan Tonkens (2009) has issued a seemingly impossible challenge, to articulate a comprehensive ethical framework within which artificial moral agents (AMAs) satisfy a Kantian inspired recipe - both "rational" and "free" - while also satisfying perceived prerogatives of Machine Ethics to create AMAs that are perfectly, not merely reliably, ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach, who have pushed for the reinvention of traditional ethics in order to avoid "ethical (...)
  28. Robot minds and human ethics: the need for a comprehensive model of moral decision making. [REVIEW] Wendell Wallach - 2010 - Ethics and Information Technology 12 (3):243-250.
    Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the “ought” of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, (...)
    24 citations
  29. Moral Encounters of the Artificial Kind: Towards a non-anthropocentric account of machine moral agency. Fabio Tollon - 2019 - Dissertation, Stellenbosch University
    The aim of this thesis is to advance a philosophically justifiable account of Artificial Moral Agency (AMA). Concerns about the moral status of Artificial Intelligence (AI) traditionally turn on questions of whether these systems are deserving of moral concern (i.e. if they are moral patients) or whether they can be sources of moral action (i.e. if they are moral agents). On the Organic View of Ethical Status, being a moral patient (...)
    1 citation
  30. Moral Machines? Michael S. Pritchard - 2012 - Science and Engineering Ethics 18 (2):411-417.
    Wendell Wallach and Colin Allen’s Moral Machines: Teaching Robots Right From Wrong (Oxford University Press, 2009) explores efforts to develop machines that not only can be employed for good or bad ends, but which themselves can be held morally accountable for what they do: artificial moral agents (AMAs). This essay is a critical response to Wallach and Allen’s conjectures. Although Wallach and Allen do not suggest that we are close to being able to create full-fledged AMAs, (...)
    1 citation
  31. A challenge for machine ethics. Ryan Tonkens - 2009 - Minds and Machines 19 (3):421-438.
    That the successful development of fully autonomous artificial moral agents (AMAs) is imminent is becoming the received view within artificial intelligence research and robotics. The discipline of Machine Ethics, whose mandate is to create such ethical robots, is consequently gaining momentum. Although it is often asked whether a given moral framework can be implemented into machines, it is never asked whether it should be. This paper articulates a pressing challenge for Machine Ethics: To identify an (...)
    39 citations
  32. The Problems of AMA’s Moral Status. 이재숭 - 2020 - Journal of the New Korean Philosophical Association 102:527-545.
    Until now, discussion of morality has largely been confined to humans, animals, and ecosystems. With the appearance of autonomous artificial entities based on artificial intelligence, however, it is increasingly argued that artificial agents should be included in discussions of moral status. As the applications of 'autonomous artificial systems' capable of judging and deciding for themselves become more complex and more practical, the problems of moral decision-making connected with them will become more complex as well. In this paper, in order to discuss the problems surrounding the moral status of artificial agents, I first set out the concepts of 'agency' and 'moral agency' and examine the criteria by which a being can be recognised as a moral agent. I then ask whether 'artificial moral agents' (AMAs) meet these (...)
  33. Moral Challenges for Bauer’s Project of a Two-level Utilitarian AMA. Silviya Serafimova - 2022 - Balkan Journal of Philosophy 14 (2):115-126.
    The main objective of this paper is to demonstrate why AI researchers’ attempts at developing projects of moral machines are a cause for concern regarding the way in which such machines can reach a certain level of morality. By comparing and contrasting Howard and Muntean’s model of a virtuous Artificial Autonomous Moral Agent and Bauer’s model of a two-level utilitarian Artificial Moral Agent, I draw the conclusion that both models raise, although in a different manner, (...)
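A 'two-level utilitarian' AMA of the kind entry 33 examines keeps a fast level of rules of thumb and escalates to explicit act-utilitarian calculation when the rules conflict or fall silent. The sketch below illustrates only this two-level architecture; the rules, escalation test, and utility numbers are invented, and it should not be read as Bauer's actual model.

```python
# Two-level utilitarian decision procedure: an intuitive level of
# rules-of-thumb handles routine cases; a critical level computes expected
# utility directly when the rules disagree or stay silent. All values below
# are invented for illustration.

INTUITIVE_RULES = {
    "lie": "avoid",
    "keep_promise": "do",
}

def expected_utility(option, outcomes):
    # outcomes[option]: list of (probability, utility) pairs
    return sum(p * u for p, u in outcomes[option])

def decide(options, outcomes):
    advice = {o: INTUITIVE_RULES.get(o) for o in options}
    dos = [o for o, a in advice.items() if a == "do"]
    avoids = [o for o, a in advice.items() if a == "avoid"]
    # Intuitive level suffices when it singles out exactly one option.
    if len(dos) == 1 and len(options) - len(avoids) == 1:
        return dos[0], "intuitive level"
    # Otherwise escalate to the critical (act-utilitarian) level.
    best = max(options, key=lambda o: expected_utility(o, outcomes))
    return best, "critical level"

options = ["lie", "keep_promise"]
outcomes = {
    "lie": [(0.8, -5.0), (0.2, 2.0)],
    "keep_promise": [(1.0, 3.0)],
}
print(decide(options, outcomes))  # -> ('keep_promise', 'intuitive level')
```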
  34. Is it time for robot rights? Moral status in artificial entities. Vincent C. Müller - 2021 - Ethics and Information Technology 23 (3):579-587.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While we (...)
    20 citations
  35. Etica Funzionale. Considerazioni filosofiche sulla teoria dell'agire morale artificiale [Functional Ethics: Philosophical Considerations on the Theory of Artificial Moral Agency]. Fabio Fossa - 2020 - Filosofia 55:91-106.
    The purpose of Machine Ethics is to develop autonomous technologies that are able to manage not just the technical aspects of a task, but also the ethical ones. As a consequence, the notion of Artificial Moral Agent (AMA) has become a fundamental element of the discussion. Its meaning, however, remains rather unclear. Depending on the author or the context, the same expression stands for essentially different concepts. This casts a suspicious light on the philosophical significance of Machine Ethics. (...)
  36. Moral Machines and the Threat of Ethical Nihilism. Anthony F. Beavers - 2011 - In Patrick Lin, George Bekey & Keith Abney (eds.), Robot Ethics: The Ethical and Social Implications of Robotics.
    In his famous 1950 paper where he presents what became the benchmark for success in artificial intelligence, Turing notes that "at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted" (Turing 1950, 442). Kurzweil (1990) suggests that Turing's prediction was correct, even if no machine has yet passed the Turing Test. In the wake of (...)
    9 citations
  37. Issues in robot ethics seen through the lens of a moral Turing test. Anne Gerdes & Peter Øhrstrøm - 2015 - Journal of Information, Communication and Ethics in Society 13 (2):98-109.
    Purpose – The purpose of this paper is to explore artificial moral agency by reflecting upon the possibility of a Moral Turing Test and whether its lack of focus on interiority, i.e. its behaviouristic foundation, counts as an obstacle to establishing such a test to judge the performance of an Artificial Moral Agent. Subsequently, to investigate whether an MTT could serve as a useful framework for the understanding, designing and engineering of AMAs, we set out (...)
    14 citations
  38. Autonomous Reboot: Kant, the categorical imperative, and contemporary challenges for machine ethicists. Jeffrey White - 2022 - AI and Society 37 (2):661-673.
    Ryan Tonkens has issued a seemingly impossible challenge, to articulate a comprehensive ethical framework within which artificial moral agents satisfy a Kantian inspired recipe—"rational" and "free"—while also satisfying perceived prerogatives of machine ethicists to facilitate the creation of AMAs that are perfectly and not merely reliably ethical. This series of papers meets this challenge by landscaping traditional moral theory in resolution of a comprehensive account of moral agency. The first paper established the challenge and set (...)
    3 citations
  39. Phronetic Ethics in Social Robotics: A New Approach to Building Ethical Robots. Roman Krzanowski & Paweł Polak - 2020 - Studies in Logic, Grammar and Rhetoric 63 (1):165-183.
    Social robots are autonomous robots, or Artificial Moral Agents (AMAs), that are meant to interact with, respect, and embody human ethical values. The conceptual and practical problems of building such systems, however, have not yet been resolved, and they pose a significant challenge for computational modeling. It seems that the lack of success in constructing such robots, ceteris paribus, is due to the conceptual and algorithmic limitations of the current designs of ethical robots. This paper proposes a new approach for developing (...)
  40. Rule based fuzzy cognitive maps and natural language processing in machine ethics. Rollin M. Omari & Masoud Mohammadian - 2016 - Journal of Information, Communication and Ethics in Society 14 (3):231-253.
    The developing academic field of machine ethics seeks to make artificial agents safer as they become more pervasive throughout society. In contrast to computer ethics, machine ethics is concerned with the behavior of machines toward human users and other machines. This study aims to use an action-based ethical theory founded on the combinational aspects of deontological and teleological theories of ethics in the construction of an artificial moral agent (AMA). The decision results derived by the AMA are (...)
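Entry 40 builds its AMA on rule-based fuzzy cognitive maps. An FCM is a signed, weighted graph of concepts whose activations are repeatedly combined through the weight matrix and squashed by a sigmoid until they settle. Below is a sketch of the standard FCM update on a three-concept map; the concepts and weights are invented for illustration, not taken from Omari and Mohammadian.

```python
import math

# Standard fuzzy cognitive map update: A_j(t+1) = f(A_j(t) + sum_i A_i(t) * W[i][j]),
# with a sigmoid squashing function f. Concepts and weights invented.

CONCEPTS = ["harm_to_user", "duty_to_warn", "act_permissible"]
W = [  # W[i][j]: signed influence of concept i on concept j
    [0.0,  0.8, -0.9],   # harm raises duty to warn, lowers permissibility
    [0.0,  0.0,  0.6],   # discharging the duty raises permissibility
    [0.0,  0.0,  0.0],
]

def sigmoid(x, steepness=2.0):
    return 1.0 / (1.0 + math.exp(-steepness * x))

def run_fcm(state, steps=20):
    # Iterate the update until the activation vector (approximately) settles.
    for _ in range(steps):
        state = [
            sigmoid(state[j] + sum(state[i] * W[i][j] for i in range(len(state))))
            for j in range(len(state))
        ]
    return dict(zip(CONCEPTS, (round(v, 3) for v in state)))

print(run_fcm([0.9, 0.0, 0.5]))  # start from a high initial harm signal
```

The 'rule based' part of the paper's title suggests the edge weights come from explicit rules rather than training data; in this sketch they are simply hand-set constants.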
  41. Autonomous reboot: Aristotle, autonomy and the ends of machine ethics. Jeffrey White - 2022 - AI and Society 37 (2):647-659.
    Tonkens has issued a seemingly impossible challenge, to articulate a comprehensive ethical framework within which artificial moral agents satisfy a Kantian inspired recipe—"rational" and "free"—while also satisfying perceived prerogatives of machine ethicists to facilitate the creation of AMAs that are perfectly and not merely reliably ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach. Beavers pushes for the reinvention of traditional ethics to avoid "ethical nihilism" due to the reduction of morality (...)
    2 citations