31 found

  1. Bridging the civilian-military divide in responsible AI principles and practices. Rachel Azafrani & Abhishek Gupta - 2023 - Ethics and Information Technology 25 (2):1-5.
    Advances in AI research have brought increasingly sophisticated capabilities to AI systems and heightened the societal consequences of their use. Researchers and industry professionals have responded by contemplating responsible principles and practices for AI system design. At the same time, defense institutions are contemplating ethical guidelines and requirements for the development and use of AI for warfare. However, varying ethical and procedural approaches to technological development, research emphasis on offensive uses of AI, and lack of appropriate venues for multistakeholder dialogue (...)
  2. Anything new under the sun? Insights from a history of institutionalized AI ethics. Simone Casiraghi - 2023 - Ethics and Information Technology 25 (2):1-14.
    Scholars, policymakers and organizations in the EU, especially at the level of the European Commission, have turned their attention to the ethics of (trustworthy and human-centric) Artificial Intelligence (AI). However, there has been little reflexivity on (1) the history of the ethics of AI as an institutionalized phenomenon and (2) the comparison to similar episodes of “ethification” in other fields, to highlight common (unresolved) challenges. Contrary to some mainstream narratives, which stress how the increasing attention to ethical aspects of AI is (...)
  3. The seven troubles with norm-compliant robots. Tom N. Coggins & Steffen Steinert - 2023 - Ethics and Information Technology 25 (2):1-15.
    Many researchers from robotics, machine ethics, and adjacent fields seem to assume that norms represent good behavior that social robots should learn to benefit their users and society. We would like to complicate this view and present seven key troubles with norm-compliant robots: (1) norm biases, (2) paternalism, (3) tyrannies of the majority, (4) pluralistic ignorance, (5) paths of least resistance, (6) outdated norms, and (7) technologically-induced norm change. Because discussions of why norm-compliant robots can be problematic are noticeably absent (...)
  4. Selling visibility-boosts on dating apps: a problematic practice? Bouke de Vries - 2023 - Ethics and Information Technology 25 (2):1-8.
    Love, sex, and physical intimacy are some of the most desired goods in life and they are increasingly being sought on dating apps such as Tinder, Bumble, and Badoo. For those who want a leg up in the chase for other people’s attention, almost all of these apps now offer the option of paying a fee to boost one’s visibility for a certain amount of time, which may range from 30 min to a few hours. In this article, I argue (...)
  5. (Some) algorithmic bias as institutional bias. Camila Hernandez Flowerman - 2023 - Ethics and Information Technology 25 (2):1-10.
    In this paper I argue that some examples of what we label ‘algorithmic bias’ would be better understood as cases of institutional bias. Even when individual algorithms appear unobjectionable, they may produce biased outcomes given the way that they are embedded in the background structure of our social world. Therefore, the problematic outcomes associated with the use of algorithmic systems cannot be understood or accounted for without a kind of structural account. Understanding algorithmic bias as institutional bias in particular (as (...)
  6. A systematic review of almost three decades of value sensitive design (VSD): what happened to the technical investigations? Anne Gerdes & Tove Faber Frandsen - 2023 - Ethics and Information Technology 25 (2):1-16.
    This article presents a systematic literature review documenting how technical investigations have been adapted in value sensitive design (VSD) studies from 1996 to 2023. We present a systematic review, including theoretical and applied studies that either discuss or conduct technical investigations in VSD. This systematic review contributes to the VSD community when seeking to further refine the methodological framework for carrying out technical investigations in VSD.
    1 citation
  7. Why a treaty on autonomous weapons is necessary and feasible. Daan Kayser - 2023 - Ethics and Information Technology 25 (2):1-5.
    Military technology is developing at a rapid pace, and we are seeing a growing number of weapons with increasing levels of autonomy being developed and deployed. This raises various legal, ethical, and security concerns. The absence of clear international rules setting limits and governing the use of autonomous weapons is extremely concerning. There is an urgent need for the international community to work together towards a treaty, not only to safeguard ethical and legal norms, but also for our shared security. (...)
  8. Has Montefiore and Formosa resisted the Gamer’s Dilemma? Morgan Luck - 2023 - Ethics and Information Technology 25 (2):1-6.
    Montefiore and Formosa (Ethics Inf Technol 24:31, 2022) provide a useful way of narrowing the Gamer’s Dilemma to cases where virtual murder seems morally permissible, but not virtual child molestation. They then resist the dilemma by theorising that the intuitions supporting it are not moral. In this paper, I consider this theory to determine whether the dilemma has been successfully resisted. I offer reason to think that, when considering certain variations of the dilemma, Montefiore and Formosa’s theory may not be (...)
  9. Correction to: the Ethics of AI in Human Resources. Evgeni Aizenberg & Matthew J. Dennis - 2023 - Ethics and Information Technology 25 (1):1-1.
  10. Legal reviews of in situ learning in autonomous weapons. Zena Assaad & Tim McFarland - 2023 - Ethics and Information Technology 25 (1):1-10.
    A legal obligation to conduct weapons reviews is a means by which the international community can ensure that States assess whether the use of new types of weapons in armed conflict would raise humanitarian concerns. The use of artificial intelligence in weapon systems greatly complicates the process of conducting reviews, particularly where a weapon system is capable of continuing to ‘learn’ on its own after being deployed on the battlefield. This paper surveys current understandings of the weapons review challenges presented (...)
  11. Knowledge representation and acquisition for ethical AI: challenges and opportunities. Vaishak Belle - 2023 - Ethics and Information Technology 25 (1):1-12.
    Machine learning (ML) techniques have become pervasive across a range of different applications, and are now widely used in areas as disparate as recidivism prediction, consumer credit-risk analysis, and insurance pricing. Likewise, in the physical world, ML models are critical components in autonomous agents such as robotic surgeons and self-driving cars. Among the many ethical dimensions that arise in the use of ML technology in such applications, analyzing morally permissible actions is both immediate and profound. For example, there is the (...)
  12. Value Sensitive Design for autonomous weapon systems – a primer. Christine Boshuijzen-van Burken - 2023 - Ethics and Information Technology 25 (1):1-14.
    Value Sensitive Design (VSD) is a design methodology developed by Batya Friedman and Peter Kahn (2003) that brings moral deliberations into an early stage of a design process. It assumes that technology itself is not value neutral, and that value-ladenness does not lie solely in the usage of technology. This paper adds to emerging literature on VSD for autonomous weapons systems development and discusses extant literature on values in autonomous systems development in general and in autonomous weapons development in particular. I identify (...)
  13. Autonomous Military Systems: collective responsibility and distributed burdens. Niël Henk Conradie - 2023 - Ethics and Information Technology 25 (1):1-14.
    The introduction of Autonomous Military Systems (AMS) onto contemporary battlefields raises concerns that they will bring with them the possibility of a techno-responsibility gap, leaving insecurity about how to attribute responsibility in scenarios involving these systems. In this work I approach this problem in the domain of applied ethics with foundational conceptual work on autonomy and responsibility. I argue that concerns over the use of AMS can be assuaged by recognising the richly interrelated context in which these systems will most (...)
  14. Artificial intelligence and humanitarian obligations. David Danks & Daniel Trusilo - 2023 - Ethics and Information Technology 25 (1):1-5.
    Artificial Intelligence (AI) offers numerous opportunities to improve military Intelligence, Surveillance, and Reconnaissance (ISR) operations, and modern militaries recognize the strategic value of reducing civilian harm. Grounded in these two assertions, we focus on the transformative potential that AI ISR systems have for improving the respect for and protection of humanitarian relief operations. Specifically, we propose that establishing an interface for humanitarian organizations to military AI ISR systems can improve the current state of ad-hoc humanitarian notification systems, which are notoriously unreliable (...)
  15. Role of emotions in responsible military AI. José Kerstholt, Mark Neerincx, Karel van den Bosch, Jason S. Metcalfe & Jurriaan van Diggelen - 2023 - Ethics and Information Technology 25 (1):1-4.
  16. Responsible reliance concerning development and use of AI in the military domain. Dustin A. Lewis & Vincent Boulanin - 2023 - Ethics and Information Technology 25 (1):1-5.
    In voicing commitments to the principle that the adoption of artificial-intelligence (AI) tools by armed forces should be done responsibly, a growing number of states have referred to a concept of “Responsible AI.” As part of an effort to help develop the substantive contours of that concept in meaningful ways, this position paper introduces a notion of “responsible reliance.” It is submitted that this notion could help the policy conversation expand from its current relatively narrow focus on interactions between an (...)
  17. Military artificial intelligence as power: consideration for European Union actorness. Justinas Lingevicius - 2023 - Ethics and Information Technology 25 (1):1-13.
    The article focuses on the inconsistency between the European Commission’s position of excluding military AI from its emerging AI policy and, at the same time, EU policy initiatives targeted at supporting military and defence elements of AI at the EU level. This raises the question of what the debate on military AI suggests about the EU’s actorness, discussed in light of the Europe-as-a-power debate, with a particular focus on Normative Power Europe, Market Power Europe, and (...)
  18. The value of responsibility gaps in algorithmic decision-making. Lauritz Munch, Jakob Mainz & Jens Christian Bjerring - 2023 - Ethics and Information Technology 25 (1):1-11.
    Many seem to think that AI-induced responsibility gaps are morally bad and therefore ought to be avoided. We argue, by contrast, that there is at least a pro tanto reason to welcome responsibility gaps. The central reason is that it can be bad for people to be responsible for wrongdoing. This, we argue, gives us one reason to prefer automated decision-making over human decision-making, especially in contexts where the risks of wrongdoing are high. While we are not the first to (...)
  19. Governing (ir)responsibilities for future military AI systems. Liselotte Polderman - 2023 - Ethics and Information Technology 25 (1):1-4.
  20. The irresponsibility of not using AI in the military. M. Postma, E. O. Postma, R. H. A. Lindelauf & H. W. Meerveld - 2023 - Ethics and Information Technology 25 (1):1-6.
    The ongoing debate on the ethics of using artificial intelligence (AI) in military contexts has been negatively impacted by the predominant focus on the use of lethal autonomous weapon systems (LAWS) in war. However, AI technologies have a considerably broader scope and present opportunities for decision support optimization across the entire spectrum of the military decision-making process (MDMP). These opportunities cannot be ignored. Instead of mainly focusing on the risks of the use of AI in target engagement, the debate about (...)
  21. Automated opioid risk scores: a case for machine learning-induced epistemic injustice in healthcare. Giorgia Pozzi - 2023 - Ethics and Information Technology 25 (1):1-12.
    Artificial intelligence-based (AI) technologies such as machine learning (ML) systems are playing an increasingly relevant role in medicine and healthcare, bringing about novel ethical and epistemological issues that need to be addressed in a timely manner. Even though ethical questions connected to epistemic concerns have been at the center of the debate, it has gone unnoticed how epistemic forms of injustice can be ML-induced, specifically in healthcare. I analyze the shortcomings of an ML system currently deployed in the USA to predict patients’ likelihood (...)
  22. Ethics of sleep tracking: techno-ethical particularities of consumer-led sleep-tracking with a focus on medicalization, vulnerability, and relationality. Nadia Primc, Jonathan Hunger, Robert Ranisch, Eva Kuhn & Regina Müller - 2023 - Ethics and Information Technology 25 (1):1-12.
    Consumer-targeted sleep tracking applications (STA) that run on mobile devices (e.g., smartphones) promise to be useful tools for the individual user. Assisted by built-in and/or external sensors, these apps can analyze sleep data and generate assessment reports for the user on their sleep duration and quality. However, STA also raise ethical questions, for example, on the autonomy of the sleeping person, or potential effects on third parties. Nevertheless, a specific ethical analysis of the use of these technologies is still missing (...)
  23. Legal and ethical implications of autonomous cyber capabilities: a call for retaining human control in cyberspace. Marta Stroppa - 2023 - Ethics and Information Technology 25 (1):1-6.
  24. Model of a military autonomous device following International Humanitarian Law. Tom van Engers, Jonathan Kwik & Tomasz Zurek - 2023 - Ethics and Information Technology 25 (1):1-12.
    In this paper we introduce a computational control framework that can keep AI-driven military autonomous devices operating within the boundaries set by applicable rules of International Humanitarian Law (IHL) related to targeting. We discuss the necessary legal tests and variables, and introduce the structure of a hypothetical IHL-compliant targeting system.
  25. Moral autonomy of patients and legal barriers to a possible duty of health related data sharing. Anton Vedder & Daniela Spajić - 2023 - Ethics and Information Technology 25 (1):1-11.
    Informed consent bears significant relevance as a legal basis for the processing of personal data and health data in current privacy, data protection and confidentiality legislation. The consent requirements find their basis in an ideal of personal autonomy. Yet, with the recent advent of the global pandemic and the increased use of eHealth applications in its wake, a more differentiated perspective with regard to this normative approach might soon gain momentum. This paper discusses the compatibility of a moral duty (...)
  26. Design for values and conceptual engineering. Herman Veluwenkamp & Jeroen van den Hoven - 2023 - Ethics and Information Technology 25 (1):1-12.
    Politicians and engineers are increasingly realizing that values are important in the development of technological artefacts. What is often overlooked is that different conceptualizations of these abstract values lead to different design-requirements. For example, designing social media platforms for deliberative democracy sets us up for technical work on completely different types of architectures and mechanisms than designing for so-called liquid or direct forms of democracy. Thinking about democracy is not enough; we need to design for the proper conceptualization of these (...)
  27. Who is controlling whom? Reframing “meaningful human control” of AI systems in security. Pascal Vörös, Serhiy Kandul, Thomas Burri & Markus Christen - 2023 - Ethics and Information Technology 25 (1):1-7.
    Decisions in security contexts, including armed conflict, law enforcement, and disaster relief, often need to be taken under circumstances of limited information, stress, and time pressure. Since AI systems are capable of providing a certain amount of relief in such contexts, such systems will become increasingly important, be it as decision-support or decision-making systems. However, given that human life may be at stake in such situations, moral responsibility for such decisions should remain with humans. Hence the idea of “meaningful human (...)
  28. Prospects for the global governance of autonomous weapons: comparing Chinese, Russian, and US practices. Tom F. A. Watts, Guangyu Qiao-Franco, Anna Nadibaidze, Hendrik Huelss & Ingvild Bode - 2023 - Ethics and Information Technology 25 (1):1-15.
    Technological developments in the sphere of artificial intelligence (AI) inspire debates about the implications of autonomous weapon systems (AWS), which can select and engage targets without human intervention. While ever more systems that could qualify as AWS, such as loitering munitions, are reportedly used in armed conflicts, the global discussion about a system of governance and international legal norms on AWS at the United Nations Convention on Certain Conventional Weapons (UN CCW) has stalled. In this article we argue for the (...)
  29. Autonomous weapon systems and responsibility gaps: a taxonomy. Nathan Gabriel Wood - 2023 - Ethics and Information Technology 25 (1):1-14.
    A classic objection to autonomous weapon systems (AWS) is that these could create so-called responsibility gaps, where it is unclear who should be held responsible in the event that an AWS were to violate some portion of the law of armed conflict (LOAC). However, those who raise this objection generally present it as a problem for AWS as a whole class of weapons. Yet there exists a rather wide range of systems that can be counted as “autonomous weapon (...)
  30. AWS compliance with the ethical principle of proportionality: three possible solutions. Maciek Zając - 2023 - Ethics and Information Technology 25 (1):1-13.
    The ethical Principle of Proportionality requires combatants not to cause collateral harm excessive in comparison to the anticipated military advantage of an attack. This principle is considered a major (and perhaps insurmountable) obstacle to ethical use of autonomous weapon systems (AWS). This article reviews three possible solutions to the problem of achieving Proportionality compliance in AWS. In doing so, I describe and discuss the three components of Proportionality judgments, namely collateral damage estimation, assessment of anticipated military advantage, and judgment of “excessiveness”. (...)
  31. The need for and nature of a normative, cultural psychology of weaponized AI (artificial intelligence). Qin Zhu, Ingvild Bode & Rockwell Clancy - 2023 - Ethics and Information Technology 25 (1):1-6.
    The use of AI in weapons systems raises numerous ethical issues. To date, work on weaponized AI has tended to be theoretical and normative in nature, consisting in critical policy analyses and ethical considerations, carried out by philosophers, legal scholars, and political scientists. However, adequately addressing the cultural and social dimensions of technology requires insights and methods from empirical moral and cultural psychology. To do so, this position piece describes the motivations for and sketches the nature of a normative, cultural (...)