  • Strictly Human: Limitations of Autonomous Systems. Sadjad Soltanzadeh - 2022 - Minds and Machines 32 (2):269-288.
    Can autonomous systems replace humans in the performance of their activities? How does the answer to this question inform the design of autonomous systems? The study of technical systems and their features should be preceded by the study of the activities in which they play roles. Each activity can be described by its overall goals, governing norms and the intermediate steps which are taken to achieve the goals and to follow the norms. This paper uses the activity realist approach to (...)
  • Who is controlling whom? Reframing “meaningful human control” of AI systems in security. Pascal Vörös, Serhiy Kandul, Thomas Burri & Markus Christen - 2023 - Ethics and Information Technology 25 (1):1-7.
    Decisions in security contexts, including armed conflict, law enforcement, and disaster relief, often need to be taken under circumstances of limited information, stress, and time pressure. Since AI systems are capable of providing a certain amount of relief in such contexts, such systems will become increasingly important, be it as decision-support or decision-making systems. However, given that human life may be at stake in such situations, moral responsibility for such decisions should remain with humans. Hence the idea of “meaningful human (...)
  • Weak Signal-Oriented Investigation of Ethical Dissonance Applied to Unsuccessful Mobility Experiences Linked to Human–Machine Interactions. F. Vanderhaegen - 2021 - Science and Engineering Ethics 27 (1):1-25.
    Ethical dissonance arises from conflicts between beliefs or behaviors and affects ethical factors such as normality or conformity. This paper proposes a weak signal-oriented framework to investigate ethical dissonance from experiences linked to human–machine interactions. It is based on a systems engineering principle called human-systems inclusion, which considers any experience feedback of weak signals as beneficial to learn. The framework studies weak signal-based scenarios from testimonies of individual experiences and these scenarios are assessed by other people. For this purpose, the (...)
  • The Future of War: The Ethical Potential of Leaving War to Lethal Autonomous Weapons. Steven Umbrello, Phil Torres & Angelo F. De Bellis - 2020 - AI and Society 35 (1):273-282.
    Lethal Autonomous Weapons (LAWs) are robotic weapons systems, primarily of value to the military, that could engage in offensive or defensive actions without human intervention. This paper assesses and engages the current arguments for and against the use of LAWs through the lens of achieving more ethical warfare. Specific interest is given particularly to ethical LAWs, which are artificially intelligent weapons systems that make decisions within the bounds of their ethics-based code. To ensure that a wide, but not exhaustive, survey (...)
  • A Comparative Analysis of the Definitions of Autonomous Weapons Systems. Mariarosaria Taddeo & Alexander Blanchard - 2022 - Science and Engineering Ethics 28 (5):1-22.
    In this report we focus on the definition of autonomous weapons systems (AWS). We provide a comparative analysis of existing official definitions of AWS as provided by States and international organisations, like the ICRC and NATO. The analysis highlights that the definitions focus on different aspects of AWS and hence lead to different approaches to address the ethical and legal problems of these weapons systems. This approach is detrimental both in terms of fostering an understanding of AWS and in facilitating (...)
  • Why machines cannot be moral. Robert Sparrow - 2021 - AI and Society (3):685-693.
    The fact that real-world decisions made by artificial intelligences (AI) are often ethically loaded has led a number of authorities to advocate the development of “moral machines”. I argue that the project of building “ethics” “into” machines presupposes a flawed understanding of the nature of ethics. Drawing on the work of the Australian philosopher, Raimond Gaita, I argue that ethical dilemmas are problems for particular people and not (just) problems for everyone who faces a similar situation. Moreover, the force of (...)
  • Customizable Ethics Settings for Building Resilience and Narrowing the Responsibility Gap: Case Studies in the Socio-Ethical Engineering of Autonomous Systems. Sadjad Soltanzadeh, Jai Galliott & Natalia Jevglevskaja - 2020 - Science and Engineering Ethics 26 (5):2693-2708.
    Ethics settings allow for morally significant decisions made by humans to be programmed into autonomous machines, such as autonomous vehicles or autonomous weapons. Customizable ethics settings are a type of ethics setting in which the users of autonomous machines make such decisions. Here two arguments are provided in defence of customizable ethics settings. Firstly, by approaching ethics settings in the context of failure management, it is argued that customizable ethics settings are instrumentally and inherently valuable for building resilience into the (...)
  • “Trust but Verify”: The Difficulty of Trusting Autonomous Weapons Systems. Heather M. Roff & David Danks - 2018 - Journal of Military Ethics 17 (1):2-20.
    Autonomous weapons systems pose many challenges in complex battlefield environments. Previous discussions of them have largely focused on technological or policy issues. In contrast, we focus here on the challenge of trust in an AWS. One type of human trust depends only on judgments about the predictability or reliability of the trustee, and so is suitable for all manner of artifacts. However, AWSs that are worthy of the descriptor “autonomous” will not exhibit the required strong predictability in the complex, changing (...)
  • The irresponsibility of not using AI in the military. M. Postma, E. O. Postma, R. H. A. Lindelauf & H. W. Meerveld - 2023 - Ethics and Information Technology 25 (1):1-6.
    The ongoing debate on the ethics of using artificial intelligence (AI) in military contexts has been negatively impacted by the predominant focus on the use of lethal autonomous weapon systems (LAWS) in war. However, AI technologies have a considerably broader scope and present opportunities for decision support optimization across the entire spectrum of the military decision-making process (MDMP). These opportunities cannot be ignored. Instead of mainly focusing on the risks of the use of AI in target engagement, the debate about (...)
  • The race for an artificial general intelligence: implications for public policy. Wim Naudé & Nicola Dimitri - 2020 - AI and Society 35 (2):367-379.
    An arms race for an artificial general intelligence would be detrimental for and even pose an existential threat to humanity if it results in an unfriendly AGI. In this paper, an all-pay contest model is developed to derive implications for public policy to avoid such an outcome. It is established that, in a winner-takes-all race, where players must invest in R&D, only the most competitive teams will participate. Thus, given the difficulty of AGI, the number of competing teams is unlikely (...)
  • Artificial intelligence and responsibility. Lode Lauwaert - 2021 - AI and Society 36 (3):1001-1009.
    In the debate on whether to ban LAWS, moral arguments are mainly used. One of these arguments, proposed by Sparrow, is that the use of LAWS goes hand in hand with the responsibility gap. Together with the premise that the ability to hold someone responsible is a necessary condition for the admissibility of an act, Sparrow believes that this leads to the conclusion that LAWS should be prohibited. In this article, it will be shown that Sparrow’s argumentation for both premises (...)
  • The AI Commander Problem: Ethical, Political, and Psychological Dilemmas of Human-Machine Interactions in AI-enabled Warfare. James Johnson - 2022 - Journal of Military Ethics 21 (3):246-271.
    Can AI solve the ethical, moral, and political dilemmas of warfare? How is artificial intelligence (AI)-enabled warfare changing the way we think about the ethical-political dilemmas and practice of war? This article explores the key elements of the ethical, moral, and political dilemmas of human-machine interactions in modern digitized warfare. It provides a counterpoint to the argument that AI “rational” efficiency can simultaneously offer a viable solution to human psychological and biological fallibility in combat while retaining “meaningful” human control over (...)
  • The Problem with Killer Robots. Nathan Gabriel Wood - 2020 - Journal of Military Ethics 19 (3):220-240.
    Warfare is becoming increasingly automated, from automatic missile defense systems to micro-UAVs (WASPs) that can maneuver through urban environments with ease, and each advance brings with it ethical questions in need of resolving. Proponents of lethal autonomous weapons systems (LAWS) provide varied arguments in their favor; robots are capable of better identifying combatants and civilians, thus reducing "collateral damage"; robots need not protect themselves and so can incur more risks to protect innocents or gather more information before using deadly force; (...)
  • AI and Spinoza: a review of law’s conceptual treatment of Lethal Autonomous. [REVIEW] Moa De Lucia Dahlbeck - forthcoming - AI and Society:1-9.
    In this article I will argue that the philosophy of Benedict Spinoza may assist us in coming to terms with some of the conceptual challenges that the phenomenon of Artificial Intelligence poses for law and legal thought. I will pursue this argument in three steps. First, I will suggest that Spinoza’s philosophy of the mind and knowledge may function as an analytical tool for making sense of the prevailing conception of AI within the legal discourse on Lethal Autonomous Weapons Systems. (...)
  • Autonomous weapons systems and the necessity of interpretation: what Heidegger can tell us about automated warfare. Kieran M. Brayford - forthcoming - AI and Society:1-9.
    Despite resistance from various societal actors, the development and deployment of lethal autonomous weaponry to warzones is perhaps likely, considering the perceived operational and ethical advantage such weapons are purported to bring. In this paper, it is argued that the deployment of truly autonomous weaponry presents an ethical danger by calling into question the ability of such weapons to abide by the Laws of War. This is done by noting the resonances between battlefield target identification and the process of ontic-ontological (...)
  • “The Sort of War They Deserve”? The Ethics of Emerging Air Power and the Debate over Warbots. Benjamin R. Banta - 2018 - Journal of Military Ethics 17 (2):156-171.
    As new military technologies change the character of war by empowering agents in new ways, it can become more difficult for our ethics of war to achieve the right balance between moral principle and necessity. Indeed, there is an ever-growing literature that seeks to apply, defend and/or update the ethics of war in light of what is often claimed to be an unprecedented period of rapid advancement in military robotics, or warbots. To increase confidence that our approach to (...)