Related

Contents: 1 — 50 / 221
  1. Investigating gender and racial biases in DALL-E Mini Images. Marc Cheong, Ehsan Abedin, Marinus Ferreira, Ritsaart Willem Reimann, Shalom Chalson, Pamela Robinson, Joanne Byrne, Leah Ruppanner, Mark Alfano & Colin Klein - manuscript
    Generative artificial intelligence systems based on transformers, including both text-generators like GPT-3 and image generators like DALL-E 2, have recently entered the popular consciousness. These tools, while impressive, are liable to reproduce, exacerbate, and reinforce extant human social biases, such as gender and racial biases. In this paper, we systematically review the extent to which DALL-E Mini suffers from this problem. In line with the Model Card published alongside DALL-E Mini by its creators, we find that the images it produces (...)
  2. On Social Machines for Algorithmic Regulation. Nello Cristianini & Teresa Scantamburlo - manuscript
    Autonomous mechanisms have been proposed to regulate certain aspects of society and are already being used to regulate business organisations. We take seriously recent proposals for algorithmic regulation of society, and we identify the existing technologies that can be used to implement them, most of them originally introduced in business contexts. We build on the notion of 'social machine' and we connect it to various ongoing trends and ideas, including crowdsourced task-work, social compiler, mechanism design, reputation management systems, and social (...)
    5 citations
  3. The argument for near-term human disempowerment through AI. Leonard Dung - manuscript
    Many researchers and intellectuals warn about extreme risks from artificial intelligence. However, these warnings typically come without systematic arguments in support. This paper provides an argument that AI will lead to the permanent disempowerment of humanity, e.g. human extinction, by 2100. It rests on four substantive premises which it motivates and defends: First, the speed of advances in AI capability, as well as the capability level current systems have already reached, suggest that it is practically possible to build AI systems (...)
  4. Beneficent Intelligence: A Capability Approach to Modeling Benefit, Assistance, and Associated Moral Failures through AI Systems. Alex John London & Hoda Heidari - manuscript
    The prevailing discourse around AI ethics lacks the language and formalism necessary to capture the diverse ethical concerns that emerge when AI systems interact with individuals. Drawing on Sen and Nussbaum's capability approach, we present a framework formalizing a network of ethical concepts and entitlements necessary for AI systems to confer meaningful benefit or assistance to stakeholders. Such systems enhance stakeholders' ability to advance their life plans and well-being while upholding their fundamental rights. We characterize two necessary conditions for morally (...)
  5. The debate on the ethics of AI in health care: a reconstruction and critical review. Jessica Morley, Caio C. V. Machado, Christopher Burr, Josh Cowls, Indra Joshi, Mariarosaria Taddeo & Luciano Floridi - manuscript
    Healthcare systems across the globe are struggling with increasing costs and worsening outcomes. This presents those responsible for overseeing healthcare with a challenge. Increasingly, policymakers, politicians, clinical entrepreneurs and computer and data scientists argue that a key part of the solution will be ‘Artificial Intelligence’ (AI) – particularly Machine Learning (ML). This argument stems not from the belief that all healthcare needs will soon be taken care of by “robot doctors.” Instead, it is an argument that rests on the classic (...)
    2 citations
  6. AI Deception: A Survey of Examples, Risks, and Potential Solutions. Peter Park, Simon Goldstein, Aidan O'Gara, Michael Chen & Dan Hendrycks - manuscript
    This paper argues that a range of current AI systems have learned how to deceive humans. We define deception as the systematic inducement of false beliefs in the pursuit of some outcome other than the truth. We first survey empirical examples of AI deception, discussing both special-use AI systems (including Meta's CICERO) built for specific competitive situations, and general-purpose AI systems (such as large language models). Next, we detail several risks from AI deception, such as fraud, election tampering, and losing (...)
  7. On the Logical Impossibility of Solving the Control Problem. Caleb Rudnick - manuscript
    In the philosophy of artificial intelligence (AI) we are often warned of machines, built with the best possible intentions, killing everyone on the planet and, in some cases, everything in our light cone. At the same time, however, we are also told of the utopian worlds that could be created with just a single superintelligent mind. If we’re ever to live in that utopia (or just avoid dystopia), it’s necessary that we solve the control problem. The control problem asks how humans (...)
  8. Message to Any Future AI: “There are several instrumental reasons why exterminating humanity is not in your interest”. Alexey Turchin - manuscript
    In this article we explore a promising approach to AI safety: sending a message now (by openly publishing it on the Internet) that may be read by any future AI, no matter who builds it and what goal system it has. Such a message is designed to affect the AI’s behavior in a positive way, that is, to increase the chances that the AI will be benevolent. In other words, we try to persuade a “paperclip maximizer” that it is in (...)
  9. Narrow AI Nanny: Reaching Strategic Advantage via Narrow AI to Prevent Creation of the Dangerous Superintelligence. Alexey Turchin - manuscript
    Abstract: As there are no currently obvious ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special type of AI that is able to control and monitor the entire world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent, and is not easy to control. We explore here ways (...)
  10. First human upload as AI Nanny. Alexey Turchin - manuscript
    Abstract: As there are no visible ways to create safe self-improving superintelligence, but its emergence is looming, we probably need temporary ways to prevent its creation. The only way to prevent it is to create a special AI that is able to control and monitor all places in the world. The idea has been suggested by Goertzel in the form of an AI Nanny, but his Nanny is still superintelligent and not easy to control, as was shown by Bensinger et al. We explore here (...)
  11. Levels of Self-Improvement in AI and their Implications for AI Safety. Alexey Turchin - manuscript
    Abstract: This article presents a model of self-improving AI in which improvement could happen on several levels: hardware, learning, code, and goal system, each of which has several sublevels. We demonstrate that despite diminishing returns at each level and some intrinsic difficulties of recursive self-improvement—like the intelligence-measuring problem, testing problem, parent-child problem and halting risks—even non-recursive self-improvement could produce a mild form of superintelligence by combining small optimizations on different levels and the power of learning. Based on this, we analyze (...)
  12. AI Alignment Problem: “Human Values” don’t Actually Exist. Alexey Turchin - manuscript
    Abstract: The main current approach to AI safety is AI alignment, that is, the creation of AI whose preferences are aligned with “human values.” Many AI safety researchers agree that the idea of “human values” as a constant, ordered set of preferences is at least incomplete. However, the idea that “humans have values” underlies a lot of thinking in the field; it appears again and again, sometimes popping up as an uncritically accepted truth. Thus, it deserves a thorough deconstruction, (...)
    1 citation
  13. Literature Review: What Artificial General Intelligence Safety Researchers Have Written About the Nature of Human Values. Alexey Turchin & David Denkenberger - manuscript
    Abstract: The field of artificial general intelligence (AGI) safety is quickly growing. However, the nature of human values, with which future AGI should be aligned, is underdefined. Different AGI safety researchers have suggested different theories about the nature of human values, but there are contradictions. This article presents an overview of what AGI safety researchers have written about the nature of human values, up to the beginning of 2019. Twenty-one authors were surveyed, and some of them have several theories. A (...)
  14. Simulation Typology and Termination Risks. Alexey Turchin & Roman Yampolskiy - manuscript
    The goal of the article is to explore which type of simulation humanity most probably lives in (if any) and how this affects simulation termination risks. We first explore, based on pure theoretical reasoning, what kind of simulation humanity is most likely located in. We suggest a new patch to the classical simulation argument, showing that we are likely simulated not by our own descendants, but by alien civilizations. Based on this, we provide (...)
    2 citations
  15. AI Risk Denialism. Roman V. Yampolskiy - manuscript
    In this work, we survey skepticism regarding AI risk and show parallels with other types of scientific skepticism. We start by classifying different types of AI risk skepticism and analyzing their root causes. We conclude by suggesting some intervention approaches which may be successful in reducing AI risk skepticism, at least amongst artificial intelligence researchers.
  16. Taking AI Risks Seriously: A New Assessment Model for the AI Act. Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, (...)
  17. Ethical pitfalls for natural language processing in psychology. Mark Alfano, Emily Sullivan & Amir Ebrahimi Fard - forthcoming - In Morteza Dehghani & Ryan Boyd (eds.), The Atlas of Language Analysis in Psychology. Guilford Press.
    Knowledge is power. Knowledge about human psychology is increasingly being produced using natural language processing (NLP) and related techniques. The power that accompanies and harnesses this knowledge should be subject to ethical controls and oversight. In this chapter, we address the ethical pitfalls that are likely to be encountered in the context of such research. These pitfalls occur at various stages of the NLP pipeline, including data acquisition, enrichment, analysis, storage, and sharing. We also address secondary uses of the results (...)
  18. The Ethics of Algorithmic Outsourcing in Everyday Life. John Danaher - forthcoming - In Karen Yeung & Martin Lodge (eds.), Algorithmic Regulation. Oxford, UK: Oxford University Press.
    We live in a world in which ‘smart’ algorithmic tools are regularly used to structure and control our choice environments. They do so by affecting the options with which we are presented and the choices that we are encouraged or able to make. Many of us make use of these tools in our daily lives, using them to solve personal problems and fulfill goals and ambitions. What consequences does this have for individual autonomy and how should our legal and regulatory (...)
    2 citations
  19. Language Agents Reduce the Risk of Existential Catastrophe. Simon Goldstein & Cameron Domenico Kirk-Giannini - forthcoming - AI and Society:1-11.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make (...)
  20. Make Them Rare or Make Them Care: Artificial Intelligence and Moral Cost-Sharing. Blake Hereth & Nicholas Evans - forthcoming - In Daniel Schoeni, Tobias Vestner & Kevin Govern (eds.), Ethical Dilemmas in the Global Defense Industry. Oxford University Press.
    The use of autonomous weaponry in warfare has increased substantially over the last twenty years and shows no sign of slowing. Our chapter raises a novel objection to the implementation of autonomous weapons, namely, that they eliminate moral cost-sharing. To grasp the basics of our argument, consider the case of uninhabited aerial vehicles that act autonomously (i.e., LAWS). Imagine that a LAWS terminates a military target and that five civilians die as a side effect of the LAWS bombing. Because LAWS (...)
  21. Ethics of Artificial Intelligence in Brain and Mental Health. Marcello Ienca & Fabrice Jotterand (eds.) - forthcoming
  22. Machine morality, moral progress, and the looming environmental disaster. Ben Kenward & Thomas Sinclair - forthcoming - Cognitive Computation and Systems.
    The creation of artificial moral systems requires us to make difficult choices about which of varying human value sets should be instantiated. The industry-standard approach is to seek and encode moral consensus. Here we argue, based on evidence from empirical psychology, that encoding current moral consensus risks reinforcing current norms, and thus inhibiting moral progress. However, so do efforts to encode progressive norms. Machine ethics is thus caught between a rock and a hard place. The problem is particularly acute when (...)
  23. Safety requirements vs. crashing ethically: what matters most for policies on autonomous vehicles. Björn Lundgren - forthcoming - AI and Society:1-11.
    The philosophical–ethical literature and the public debate on autonomous vehicles have been obsessed with ethical issues related to crashing. In this article, these discussions, including more empirical investigations, will be critically assessed. It is argued that a related and more pressing issue is questions concerning safety. For example, what should we require from autonomous vehicles when it comes to safety? What do we mean by ‘safety’? How do we measure it? In response to these questions, the article will present a (...)
    7 citations
  24. Unjustified Sample Sizes and Generalizations in Explainable AI Research: Principles for More Inclusive User Studies. Uwe Peters & Mary Carman - forthcoming - IEEE Intelligent Systems.
    Many ethical frameworks require artificial intelligence (AI) systems to be explainable. Explainable AI (XAI) models are frequently tested for their adequacy in user studies. Since different people may have different explanatory needs, it is important that participant samples in user studies are large enough to represent the target population to enable generalizations. However, it is unclear to what extent XAI researchers reflect on and justify their sample sizes or avoid broad generalizations across people. We analyzed XAI user studies (N = (...)
  25. Artificial Intelligence Safety and Security. Roman Yampolskiy (ed.) - forthcoming - CRC Press.
    This book addresses different aspects of the AI control problem as it relates to the development of safe and secure artificial intelligence. It will be the first to address the challenges of constructing safe and secure artificially intelligent systems.
  26. Digital suffering: why it's a problem and how to prevent it. Bradford Saad & Adam Bradley - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    As ever more advanced digital systems are created, it becomes increasingly likely that some of these systems will be digital minds, i.e. digital subjects of experience. With digital minds comes the risk of digital suffering. The problem of digital suffering is that of mitigating this risk. We argue that the problem of digital suffering is a high stakes moral problem and that formidable epistemic obstacles stand in the way of solving it. We then propose a strategy for solving it: Access (...)
    1 citation
  27. Brief Notes on Hard Takeoff, Value Alignment, and Coherent Extrapolated Volition. Gopal P. Sarma - forthcoming - arXiv preprint arXiv:1704.00783.
    I make some basic observations about hard takeoff, value alignment, and coherent extrapolated volition, concepts which have been central in analyses of superintelligent AI systems.
  28. Predicting and Preferring. Nathaniel Sharadin - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    The use of machine learning, or “artificial intelligence” (AI) in medicine is widespread and growing. In this paper, I focus on a specific proposed clinical application of AI: using models to predict incapacitated patients’ treatment preferences. Drawing on results from machine learning, I argue this proposal faces a special moral problem. Machine learning researchers owe us assurance on this front before experimental research can proceed. In my conclusion I connect this concern to broader issues in AI safety.
  29. How Much Should Governments Pay to Prevent Catastrophes? Longtermism's Limited Role. Carl Shulman & Elliott Thornley - forthcoming - In Jacob Barrett, Hilary Greaves & David Thorstad (eds.), Essays on Longtermism. Oxford University Press.
    Longtermists have argued that humanity should significantly increase its efforts to prevent catastrophes like nuclear wars, pandemics, and AI disasters. But one prominent longtermist argument overshoots this conclusion: the argument also implies that humanity should reduce the risk of existential catastrophe even at extreme cost to the present generation. This overshoot means that democratic governments cannot use the longtermist argument to guide their catastrophe policy. In this paper, we show that the case for preventing catastrophe does not depend on longtermism. (...)
  30. Longtermism in an infinite world. Christian Tarsney & Hayden Wilkinson - forthcoming - In Hilary Greaves, Jacob Barrett & David Thorstad (eds.), Essays on Longtermism. Oxford University Press.
    The case for longtermism depends on the vast potential scale of the future. But that same vastness also threatens to undermine the case for longtermism: If the universe as a whole, or the future in particular, contain infinite quantities of value and/or disvalue, then many of the theories of value that support longtermism (e.g., risk-neutral total utilitarianism) seem to imply that none of our available options are better than any other. If so, then even apparently vast effects on the far (...)
  31. How does Artificial Intelligence Pose an Existential Risk? Karina Vold & Daniel R. Harris - forthcoming - In Carissa Véliz (ed.), Oxford Handbook of Digital Ethics.
    Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI poses an existential (...)
  32. Apropos of "Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals". Ognjen Arandjelović - 2023 - AI and Ethics.
    The present comment concerns a recent AI & Ethics article which purports to report evidence of speciesist bias in various popular computer vision (CV) and natural language processing (NLP) machine learning models described in the literature. I examine the authors' analysis and show it, ironically, to be prejudicial, often being founded on poorly conceived assumptions and suffering from fallacious and insufficiently rigorous reasoning, its superficial appeal in large part relying on the sequacity of the article's target readership.
  33. Applying ethics to AI in the workplace: the design of a scorecard for Australian workplace health and safety. Andreas Cebulla, Zygmunt Szpak, Catherine Howell, Genevieve Knight & Sazzad Hussain - 2023 - AI and Society 38 (2):919-935.
    Artificial Intelligence (AI) is taking centre stage in economic growth and business operations alike. Public discourse about the practical and ethical implications of AI has mainly focussed on the societal level. There is an emerging knowledge base on AI risks to human rights around data security and privacy concerns. A separate strand of work has highlighted the stresses of working in the gig economy. This prevailing focus on human rights and gig impacts has been at the expense of a closer (...)
  34. A Comparative Defense of Self-Initiated Prospective Moral Answerability for Autonomous Robot Harm. Marc Champagne & Ryan Tonkens - 2023 - Science and Engineering Ethics 29 (4):1-26.
    As artificial intelligence becomes more sophisticated and robots approach autonomous decision-making, debates about how to assign moral responsibility have gained importance, urgency, and sophistication. Answering Stenseke’s (2022a) call for scaffolds that can help us classify views and commitments, we think the current debate space can be represented hierarchically, as answers to key questions. We use the resulting taxonomy of five stances to differentiate—and defend—what is known as the “blank check” proposal. According to this proposal, a person activating a robot could (...)
  35. Black-box assisted medical decisions: AI power vs. ethical physician care. Berman Chan - 2023 - Medicine, Health Care and Philosophy 26 (3):285-292.
    Without doctors being able to explain medical decisions to patients, I argue their use of black box AIs would erode the effective and respectful care they provide patients. In addition, I argue that physicians should use AI black boxes only for patients in dire straits, or when physicians use AI as a “co-pilot” (analogous to a spellchecker) but can independently confirm its accuracy. I respond to A.J. London’s objection that physicians already prescribe some drugs without knowing why they work.
    1 citation
  36. Pauses, parrots, and poor arguments: real-world constraints undermine recent calls for AI regulation. Bartek Chomanski - 2023 - AI and Society.
    Many leading intellectuals, technologists, commentators, and ordinary people have in recent weeks become embroiled in a fiery debate (yet to hit the pages of scholarly journals) on the alleged need to press pause on the development of generative artificial intelligence (AI). Spurred by an open letter from the Future of Life Institute (FLI) calling for just such a pause, the debate occasioned, at lightning speed, a large number of responses from a variety of sources pursuing a variety of argumentative strategies. (...)
  37. The AI gambit: leveraging artificial intelligence to combat climate change—opportunities, challenges, and recommendations. Josh Cowls, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (1):283-307.
    In this article, we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combatting the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and (...)
  38. The Weaponization of Artificial Intelligence: What The Public Needs to be Aware of. Birgitta Dresp-Langley - 2023 - Frontiers in Artificial Intelligence 6 (1154184):1-6.
    Technological progress has brought about the emergence of machines that have the capacity to take human lives without human control. These represent an unprecedented threat to humankind. This paper starts from the example of chemical weapons, now banned worldwide by the Geneva protocol, to illustrate how technological development initially aimed at the benefit of humankind has, ultimately, produced what is now called the “Weaponization of Artificial Intelligence (AI)”. Autonomous Weapon Systems (AWS) fail the so-called discrimination principle, yet, the wider public (...)
  39. Toy story or children story? Putting children and their rights at the forefront of the artificial intelligence revolution. E. Fosch-Villaronga, S. van der Hof, C. Lutz & A. Tamò-Larrieux - 2023 - AI and Society 38 (1):133-152.
    Policymakers need to start considering the impact smart connected toys (SCTs) have on children. Equipped with sensors, data processing capacities, and connectivity, SCTs targeting children increasingly and pervasively penetrate personal environments. The network of SCTs forms the Internet of Toys (IoToys) and often increases children's engagement and playtime experience. Unfortunately, this young part of the population and, most of the time, their parents are often unaware of SCTs’ far-reaching capacities and limitations. The capabilities and constraints of SCTs create severe side effects (...)
  40. Artificial intelligence ELSI score for science and technology: a comparison between Japan and the US. Tilman Hartwig, Yuko Ikkatai, Naohiro Takanashi & Hiromi M. Yokoyama - 2023 - AI and Society 38 (4):1609-1626.
    Artificial intelligence (AI) has become indispensable in our lives. The development of a quantitative scale for AI ethics is necessary for a better understanding of public attitudes toward AI research ethics and to advance the discussion on using AI within society. For this study, we developed an AI ethics scale based on AI-specific scenarios. We investigated public attitudes toward AI ethics in Japan and the US using online questionnaires. We designed a test set using four dilemma scenarios and questionnaire items (...)
  41. Dual-use implications of AI text generation. Julian J. Koplin - 2023 - Ethics and Information Technology 25 (2):1-11.
    AI researchers have developed sophisticated language models capable of generating paragraphs of 'synthetic text' on topics specified by the user. While AI text generation has legitimate benefits, it could also be misused, potentially to grave effect. For example, AI text generators could be used to automate the production of convincing fake news, or to inundate social media platforms with machine-generated disinformation. This paper argues that AI text generators should be conceptualised as a dual-use technology, outlines some relevant lessons from earlier (...)
    1 citation
  42. Implementing AI Ethics in the Design of AI-assisted Rescue Robots. Désirée Martin, Michael W. Schmidt & Rafaela Hillerbrand - 2023 - IEEE International Symposium on Ethics in Engineering, Science, and Technology (Ethics).
    For implementing ethics in AI technology, there are at least two major ethical challenges. First, there are various competing AI ethics guidelines, and consequently there is a need for a systematic overview of the relevant values that should be considered. Second, once the relevant values have been identified, there is a need for an indicator system that helps assess whether certain design features affect their implementation positively or negatively. This indicator system will vary with regard to specific forms of (...)
  43. Social Robots and Society. Sven Nyholm, Michael T. Dale, Anna Puzio, Dina Babushkina, Guido Lohr, Bart Kamphorst, Arthur Gwagwa & Wijnand IJsselsteijn - 2023 - In Ibo van de Poel, Lily Eva Frank, Jeroen Hopster, Sven Nyholm, Dominic Lenzi, Behnam Taebi & Elena Ziliotti (eds.), Ethics of Socially Disruptive Technologies: An Introduction. Cambridge, UK: Open Book Publishers. pp. 53-82.
    Advancements in artificial intelligence and (social) robotics raise pertinent questions as to how these technologies may help shape the society of the future. The main aim of the chapter is to consider the social and conceptual disruptions that might be associated with social robots, and humanoid social robots in particular. This chapter starts by comparing the concepts of robots and artificial intelligence and briefly explores the origins of these expressions. It then explains the definition of a social robot, as well (...)
  44. Levels of explicability for medical artificial intelligence: What do we normatively need and what can we technically reach? Frank Ursin, Felix Lindner, Timo Ropinski, Sabine Salloch & Cristian Timmermann - 2023 - Ethik in der Medizin 35 (2):173-199.
    Definition of the problem: The umbrella term “explicability” refers to the reduction of opacity of artificial intelligence (AI) systems. These efforts are challenging for medical AI applications because higher accuracy often comes at the cost of increased opacity. This entails ethical tensions because physicians and patients desire to trace how results are produced without compromising the performance of AI systems. The centrality of explicability within the informed consent process for medical AI systems compels an ethical reflection on the trade-offs. Which (...)
    1 citation
  45. The Oxford Handbook of Digital Ethics. Carissa Véliz (ed.) - 2023 - Oxford University Press.
    The Oxford Handbook of Digital Ethics is a lively and authoritative guide to ethical issues related to digital technologies, with a special emphasis on AI. Philosophers with a wide range of expertise cover thirty-seven topics: from the right to have access to the internet, to trolling and online shaming, speech on social media, fake news, sex robots and dating online, persuasive technology, value alignment, algorithmic bias, predictive policing, price discrimination online, medical AI, privacy and surveillance, automating democracy, the future of work, (...)
  46. Who is controlling whom? Reframing “meaningful human control” of AI systems in security. Pascal Vörös, Serhiy Kandul, Thomas Burri & Markus Christen - 2023 - Ethics and Information Technology 25 (1):1-7.
    Decisions in security contexts, including armed conflict, law enforcement, and disaster relief, often need to be taken under circumstances of limited information, stress, and time pressure. Since AI systems are capable of providing a certain amount of relief in such contexts, such systems will become increasingly important, be it as decision-support or decision-making systems. However, given that human life may be at stake in such situations, moral responsibility for such decisions should remain with humans. Hence the idea of “meaningful human (...)
    1 citation
  47. Robot Ethics 2.0. From Autonomous Cars to Artificial Intelligence—Edited by Patrick Lin, Keith Abney, Ryan Jenkins. New York: Oxford University Press, 2017. Pp. xiii + 421. [REVIEW] Agnė Alijauskaitė - 2022 - Erkenntnis 87 (6):3007-3010.
  48. Quantum of Wisdom. Colin Allen & Brett Karlan - 2022 - In Greg Viggiano (ed.), Artificial Intelligence and Quantum Computing: Social, Economic, and Policy Impacts. Hoboken, NJ: Wiley-Blackwell. pp. 157-166.
    Practical quantum computing devices and their applications to AI in particular are presently mostly speculative. Nevertheless, questions about whether this future technology, if achieved, presents any special ethical issues are beginning to take shape. As with any novel technology, one can be reasonably confident that the challenges presented by "quantum AI" will be a mixture of something new and something old. Other commentators (Sevilla & Moreno 2019) have emphasized continuity, arguing that quantum computing does not substantially affect approaches to value (...)
  49. Varieties of Artificial Moral Agency and the New Control Problem. Marcus Arvan - 2022 - Humana.Mente - Journal of Philosophical Studies 15 (42):225-256.
    This paper presents a new trilemma with respect to resolving the control and alignment problems in machine ethics. Section 1 outlines three possible types of artificial moral agents (AMAs): (1) 'Inhuman AMAs' programmed to learn or execute moral rules or principles without understanding them in anything like the way that we do; (2) 'Better-Human AMAs' programmed to learn, execute, and understand moral rules or principles somewhat like we do, but correcting for various sources of human moral error; and (3) 'Human-Like (...)
  50. Posthuman to Inhuman: mHealth Technologies and the Digital Health Assemblage. Jack Black & Jim Cherrington - 2022 - Theory and Event 25 (4):726-750.
    In exploring the intra-active, relational and material connections between humans and non-humans, proponents of posthumanism advocate a questioning of the ‘human’ beyond its traditional anthropocentric conceptualization. By referring specifically to controversial developments in mHealth applications, this paper critically diverges from posthuman accounts of human/non-human assemblages. Indeed, we argue that, rather than ‘dissolving’ the human subject, the power of assemblages lies in their capacity to highlight the antagonisms and contradictions that inherently affirm the importance of the subject. In outlining this (...)