Results for 'AI regulation'

985 results found
  1. Pauses, parrots, and poor arguments: real-world constraints undermine recent calls for AI regulation. Bartek Chomanski - 2023 - AI and Society.
    Many leading intellectuals, technologists, commentators, and ordinary people have in recent weeks become embroiled in a fiery debate (yet to hit the pages of scholarly journals) on the alleged need to press pause on the development of generative artificial intelligence (AI). Spurred by an open letter from the Future of Life Institute (FLI) calling for just such a pause, the debate occasioned, at lightning speed, a large number of responses from a variety of sources pursuing a variety of argumentative strategies. (...)
  2. Conformity Assessments and Post-market Monitoring: A Guide to the Role of Auditing in the Proposed European AI Regulation. Jakob Mökander, Maria Axente, Federico Casolari & Luciano Floridi - 2022 - Minds and Machines 32 (2):241-268. (7 citations)
    The proposed European Artificial Intelligence Act (AIA) is the first attempt to elaborate a general legal framework for AI carried out by any major global economy. As such, the AIA is likely to become a point of reference in the larger discourse on how AI systems can (and should) be regulated. In this article, we describe and discuss the two primary enforcement mechanisms proposed in the AIA: the _conformity assessments_ that providers of high-risk AI systems are expected to conduct, and (...)
  3. Decentered ethics in the machine era and guidance for AI regulation. Christian Hugo Hoffmann & Benjamin Hahn - 2020 - AI and Society 35 (3):635-644. (6 citations)
    Recent advancements in AI have prompted a large number of AI ethics guidelines published by governments and nonprofits. While many of these papers propose concrete or seemingly applicable ideas, few philosophically sound proposals are made. In particular, we observe that the line of questioning has often not been examined critically and underlying conceptual problems not always dealt with at the root. In this paper, we investigate the nature of ethical AI systems and what their moral status might be by first (...)
  4. Pauses, parrots, and poor arguments: real-world constraints undermine recent calls for AI regulation. Bartlomiej Chomanski - forthcoming - AI and Society:1-3.
  5. Explainable AI lacks regulative reasons: why AI and human decision-making are not equally opaque. Uwe Peters - forthcoming - AI and Ethics. (4 citations)
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
  6. AI and the path to envelopment: knowledge as a first step towards the responsible regulation and use of AI-powered machines. Scott Robbins - 2020 - AI and Society 35 (2):391-400. (10 citations)
    With Artificial Intelligence entering our lives in novel ways—both known and unknown to us—there is both the enhancement of existing ethical issues associated with AI as well as the rise of new ethical issues. There is much focus on opening up the ‘black box’ of modern machine-learning algorithms to understand the reasoning behind their decisions—especially morally salient decisions. However, some applications of AI which are no doubt beneficial to society rely upon these black boxes. Rather than requiring algorithms to be (...)
  7. Companies Committed to Responsible AI: From Principles towards Implementation and Regulation? Paul B. de Laat - 2021 - Philosophy and Technology 34 (4):1135-1193. (5 citations)
    The term ‘responsible AI’ has been coined to denote AI that is fair and non-biased, transparent and explainable, secure and safe, privacy-proof, accountable, and to the benefit of mankind. Since 2016, a great many organizations have pledged allegiance to such principles. Amongst them are 24 AI companies that did so by posting a commitment of the kind on their website and/or by joining the ‘Partnership on AI’. By means of a comprehensive web search, two questions are addressed by this study: (...)
  8. Regulating AI in Health Care: The Challenges of Informed User Engagement. Olya Kudina - 2021 - Hastings Center Report 51 (5):6-7.
    Hastings Center Report, Volume 51, Issue 5, Page 6-7, September‐October 2021.
  9. An Institutionalist Approach to AI Ethics: Justifying the Priority of Government Regulation over Self-Regulation. Thomas Ferretti - 2022 - Moral Philosophy and Politics 9 (2):239-265. (1 citation)
    This article explores the cooperation of government and the private sector to tackle the ethical dimension of artificial intelligence. The argument draws on the institutionalist approach in philosophy and business ethics defending a ‘division of moral labor’ between governments and the private sector. The goal and main contribution of this article is to explain how this approach can provide ethical guidelines to the AI industry and to highlight the limits of self-regulation. In what follows, I discuss three institutionalist claims. (...)
  10. Towards an effective transnational regulation of AI. Daniel J. Gervais - 2023 - AI and Society 38 (1):391-410. (3 citations)
    Law and the legal system through which law is effected are very powerful, yet the power of the law has always been limited by the laws of nature, upon which the law has no direct grip. Human law now faces an unprecedented challenge, the emergence of a second limit on its grip, a new “species” of intelligent agents (AI machines) that can perform cognitive tasks that until recently only humans could. What happens, as a matter of law, when another species (...)
  11. China’s New Regulations on Generative AI: Implications for Bioethics. Li Du & Kalina Kamenova - 2023 - American Journal of Bioethics 23 (10):52-54.
    Cohen’s article (2023) on the significance of ChatGPT for bioethics suggests that little is known about the development of generative AI (“GAI”) in China and other national markets. It warns about...
  12. AI in the headlines: the portrayal of the ethical issues of artificial intelligence in the media. Leila Ouchchy, Allen Coin & Veljko Dubljević - 2020 - AI and Society 35 (4):927-936. (19 citations)
    As artificial intelligence technologies become increasingly prominent in our daily lives, media coverage of the ethical considerations of these technologies has followed suit. Since previous research has shown that media coverage can drive public discourse about novel technologies, studying how the ethical issues of AI are portrayed in the media may lead to greater insight into the potential ramifications of this public discourse, particularly with regard to development and regulation of AI. This paper expands upon previous research by systematically (...)
  13. Balancing AI and academic integrity: what are the positions of academic publishers and universities? Bashar Haruna Gulumbe, Shuaibu Muhammad Audu & Abubakar Muhammad Hashim - forthcoming - AI and Society:1-10.
    This paper navigates the relationship between the growing influence of Artificial Intelligence (AI) and the foundational principles of academic integrity. It offers an in-depth analysis of how key academic stakeholders—publishers and universities—are crafting strategies and guidelines to integrate AI into the sphere of scholarly work. These efforts are not merely reactionary but are part of a broader initiative to harness AI’s potential while maintaining ethical standards. The exploration reveals a diverse array of stances, reflecting the varied applications of AI in (...)
  14. AI research ethics is in its infancy: the EU’s AI Act can make it a grown-up. Anaïs Resseguier & Fabienne Ufert - 2024 - Research Ethics 20 (2):143-155.
    As the artificial intelligence (AI) ethics field is currently working towards its operationalisation, ethics review as carried out by research ethics committees (RECs) constitutes a powerful, but so far underdeveloped, framework to make AI ethics effective in practice at the research level. This article contributes to the elaboration of research ethics frameworks for research projects developing and/or using AI. It highlights that these frameworks are still in their infancy and in need of a structure and criteria to ensure AI research (...)
  15. Taking AI Risks Seriously: a New Assessment Model for the AI Act. Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5. (2 citations)
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, (...)
  16. AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act. Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2024 - Digital Society 3 (13):1-29. (1 citation)
    The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for the assessment of these risks in concrete situations. Risks are broadly categorized based on the application areas of AI systems and ambiguous risk factors. This paper suggests a methodology for assessing AI risk magnitudes, focusing on the construction of real-world risk scenarios. To this scope, we propose to integrate the AIA with a framework developed by (...)
  17. AI, big data, and the future of consent. Adam J. Andreotta, Nin Kirkham & Marco Rizzi - 2022 - AI and Society 37 (4):1715-1728. (10 citations)
    In this paper, we discuss several problems with current Big data practices which, we claim, seriously erode the role of informed consent as it pertains to the use of personal information. To illustrate these problems, we consider how the notion of informed consent has been understood and operationalised in the ethical regulation of biomedical research (and medical practices, more broadly) and compare this with current Big data practices. We do so by first discussing three types of problems that can (...)
  18. Maximizing team synergy in AI-related interdisciplinary groups: an interdisciplinary-by-design iterative methodology. Piercosma Bisconti, Davide Orsitto, Federica Fedorczyk, Fabio Brau, Marianna Capasso, Lorenzo De Marinis, Hüseyin Eken, Federica Merenda, Mirko Forti, Marco Pacini & Claudia Schettini - 2022 - AI and Society 1 (1):1-10. (1 citation)
    In this paper, we propose a methodology to maximize the benefits of interdisciplinary cooperation in AI research groups. Firstly, we build the case for the importance of interdisciplinarity in research groups as the best means to tackle the social implications brought about by AI systems, against the backdrop of the EU Commission proposal for an Artificial Intelligence Act. As we are an interdisciplinary group, we address the multi-faceted implications of the mass-scale diffusion of AI-driven technologies. The result of our exercise (...)
  19. AI, Law and beyond. A transdisciplinary ecosystem for the future of AI & Law. Floris J. Bex - forthcoming - Artificial Intelligence and Law:1-18.
    We live in exciting times for AI and Law: technical developments are moving at a breakneck pace, and at the same time, the call for more robust AI governance and regulation grows stronger. How should we as an AI & Law community navigate these dramatic developments and claims? In this Presidential Address, I present my ideas for a way forward: researching, developing and evaluating real AI systems for the legal field with researchers from AI, Law and beyond. I will (...)
  20. AI for the public. How public interest theory shifts the discourse on AI. Theresa Züger & Hadi Asghari - 2023 - AI and Society 38 (2):815-828. (1 citation)
    AI for social good is a thriving research topic and a frequently declared goal of AI strategies and regulation. This article investigates the requirements necessary in order for AI to actually serve a public interest, and hence be socially good. The authors propose shifting the focus of the discourse towards democratic governance processes when developing and deploying AI systems. The article draws from the rich history of public interest theory in political philosophy and law, and develops a framework for (...)
  21. Emotional AI, soft biometrics and the surveillance of emotional life: An unusual consensus on privacy. Andrew McStay - 2020 - Big Data and Society 7 (1). (6 citations)
    By the early 2020s, emotional artificial intelligence will become increasingly present in everyday objects and practices such as assistants, cars, games, mobile phones, wearables, toys, marketing, insurance, policing, education and border controls. There is also keen interest in using these technologies to regulate and optimize the emotional experiences of spaces, such as workplaces, hospitals, prisons, classrooms, travel infrastructures, restaurants, retail and chain stores. Developers frequently claim that their applications do not identify people. Taking the claim at face value, this paper (...)
  22. Artificial Intelligence Regulation: a framework for governance. Patricia Gomes Rêgo de Almeida, Carlos Denner dos Santos & Josivania Silva Farias - 2021 - Ethics and Information Technology 23 (3):505-525. (4 citations)
    This article develops a conceptual framework for regulating Artificial Intelligence (AI) that encompasses all stages of modern public policy-making, from the basics to a sustainable governance. Based on a vast systematic review of the literature on Artificial Intelligence Regulation (AIR) published between 2010 and 2020, a dispersed body of knowledge loosely centred around the “framework” concept was organised, described, and pictured for better understanding. The resulting integrative framework encapsulates 21 prior depictions of the policy-making process, aiming to achieve gold-standard (...)
  23. Clinical AI: opacity, accountability, responsibility and liability. Helen Smith - 2021 - AI and Society 36 (2):535-545. (7 citations)
    The aim of this literature review was to compose a narrative review supported by a systematic approach to critically identify and examine concerns about accountability and the allocation of responsibility and legal liability as applied to the clinician and the technologist in the use of opaque AI-powered systems in clinical decision making. This review questions if it is permissible for a clinician to use an opaque AI system in clinical decision making and if a patient was harmed as a (...)
  24. Operationalising AI ethics: how are companies bridging the gap between practice and principles? An exploratory study. Javier Camacho Ibáñez & Mónica Villas Olmeda - 2022 - AI and Society 37 (4):1663-1687. (3 citations)
    Despite the increase in the research field of ethics in artificial intelligence, most efforts have focused on the debate about principles and guidelines for responsible AI, but not enough attention has been given to the “how” of applied ethics. This paper aims to advance the research exploring the gap between practice and principles in AI ethics by identifying how companies are applying those guidelines and principles in practice. Through a qualitative methodology based on 22 semi-structured interviews and two focus groups, (...)
  25. Generative AI Security: Theories and Practices. Ken Huang, Yang Wang, Ben Goertzel, Yale Li, Sean Wright & Jyoti Ponnapalli (eds.) - 2024 - Springer Nature Switzerland.
    This book explores the revolutionary intersection of Generative AI (GenAI) and cybersecurity. It presents a comprehensive guide that intertwines theories and practices, aiming to equip cybersecurity professionals, CISOs, AI researchers, developers, architects and college students with an understanding of GenAI’s profound impacts on cybersecurity. The scope of the book ranges from the foundations of GenAI, including underlying principles, advanced architectures, and cutting-edge research, to specific aspects of GenAI security such as data security, model security, application-level security, and the emerging fields (...)
  26. AI-Enhanced Healthcare: Not a new Paradigm for Informed Consent. M. Pruski - forthcoming - Journal of Bioethical Inquiry:1-15.
    With the increasing prevalence of artificial intelligence (AI) and other digital technologies in healthcare, the ethical debate surrounding their adoption is becoming more prominent. Here I consider the issue of gaining informed patient consent to AI-enhanced care from the vantage point of the United Kingdom’s National Health Service setting. I build my discussion around two claims from the World Health Organization: that healthcare services should not be denied to individuals who refuse AI-enhanced care and that there is no precedence to (...)
  27. Transparent, explainable, and accountable AI for robotics. Sandra Wachter, Brent Mittelstadt & Luciano Floridi - 2017 - Science (Robotics) 2 (6):eaan6080. (23 citations)
    To create fair and accountable AI and robotics, we need precise regulation and better methods to certify, explain, and audit inscrutable systems.
  28. AI as a boss? A national US survey of predispositions governing comfort with expanded AI roles in society. Kate K. Mays, Yiming Lei, Rebecca Giovanetti & James E. Katz - 2022 - AI and Society 37 (4):1587-1600. (1 citation)
    People’s comfort with and acceptability of artificial intelligence (AI) instantiations is a topic that has received little systematic study. This is surprising given the topic’s relevance to the design, deployment and even regulation of AI systems. To help fill in our knowledge base, we conducted mixed-methods analysis based on a survey of a representative sample of the US population (_N_ = 2254). Results show that there are two distinct social dimensions to comfort with AI: as a peer and as (...)
  29. AI Case Studies: Potential for Human Health, Space Exploration and Colonisation and a Proposed Superimposition of the Kubler-Ross Change Curve on the Hype Cycle. Martin Braddock & Matthew Williams - 2019 - Studia Humana 8 (1):3-18.
    The development and deployment of artificial intelligence (AI) is and will profoundly reshape human society, the culture and the composition of civilisations which make up human kind. All technological triggers tend to drive a hype curve which over time is realised by an output which is often unexpected, taking both pessimistic and optimistic perspectives and actions of drivers, contributors and enablers on a journey where the ultimate destination may be unclear. In this paper we hypothesise that this journey is not (...)
  30. Basic issues in AI policy. Vincent C. Müller - 2022 - In Maria Amparo Grau-Ruiz (ed.), Interactive robotics: Legal, ethical, social and economic aspects. Springer. pp. 3-9.
    This extended abstract summarises some of the basic points of AI ethics and policy as they present themselves now. We explain the notion of AI, the main ethical issues in AI and the main policy aims and means.
  31. Robotics, AI and the Future of Law. Marcelo Corrales Compagnucci, Mark Fenwick & Nikolaus Forgó (eds.) - 2018 - Singapore: Springer.
    Artificial intelligence and related technologies are changing both the law and the legal profession. In particular, technological advances in fields ranging from machine learning to more advanced robots, including sensors, virtual realities, algorithms, bots, drones, self-driving cars, and more sophisticated "human-like" robots are creating new and previously unimagined challenges for regulators. These advances also give rise to new opportunities for legal professionals to make efficiency gains in the delivery of legal services. With the exponential growth of such technologies, radical disruption (...)
  32. Bias in algorithms of AI systems developed for COVID-19: A scoping review. Janet Delgado, Alicia de Manuel, Iris Parra, Cristian Moyano, Jon Rueda, Ariel Guersenzvaig, Txetxu Ausin, Maite Cruz, David Casacuberta & Angel Puyol - 2022 - Journal of Bioethical Inquiry 19 (3):407-419. (2 citations)
    To analyze which ethically relevant biases have been identified by academic literature in artificial intelligence algorithms developed either for patient risk prediction and triage, or for contact tracing to deal with the COVID-19 pandemic. Additionally, to specifically investigate whether the role of social determinants of health have been considered in these AI developments or not. We conducted a scoping review of the literature, which covered publications from March 2020 to April 2021. ​Studies mentioning biases on AI algorithms developed for contact (...)
  33. Trustworthy AI: AI made in Germany and Europe? Hartmut Hirsch-Kreinsen & Thorben Krokowski - forthcoming - AI and Society:1-11.
    As the capabilities of artificial intelligence (AI) continue to expand, concerns are also growing about the ethical and social consequences of unregulated development and, above all, use of AI systems in a wide range of social areas. It is therefore indisputable that the application of AI requires social standardization and regulation. For years, innovation policy measures and the most diverse activities of European and German institutions have been directed toward this goal. Under the label “Trustworthy AI” (TAI), a promise (...)
  34. AI and the Law: Can Legal Systems Help Us Maximize Paperclips while Minimizing Deaths? Mihailis E. Diamantis, Rebekah Cochran & Miranda Dam - forthcoming - In Technology Ethics: A Philosophical Introduction and Readings.
    This Chapter provides a short undergraduate introduction to ethical and philosophical complexities surrounding the law’s attempt (or lack thereof) to regulate artificial intelligence. Swedish philosopher Nick Bostrom proposed a simple thought experiment known as the paperclip maximizer. What would happen if a machine (the “PCM”) were given the sole goal of manufacturing as many paperclips as possible? It might learn how to transact money, source metal, or even build factories. The machine might also eventually realize that humans pose a (...)
  35. Morphogenetic Régulation in action: understanding inclusive governance, neoliberalizing processes in Palestine, and the political economy of the contemporary internet. Andrew Dryhurst, Daniel ‘Zach’ Sloman & Yazid Zahda - 2023 - Journal of Critical Realism 22 (5):813-839. (1 citation)
    The Morphogenetic Régulation approach (MR) contributes to the Morphogenetic Approach by explaining the material and ideational origins of change and stasis in agency, structure, and culture. In this paper, we focus on the expressive quality of ideas and systemic persistence in three research projects. The first demystifies inclusive governance and its adverse impacts. It shows how, contrary to institutions of governance, inclusiveness is not simply a norm but actually the explication of corporate agents’ ideas about rational choice institutionalism which leads (...)
  36. The EU Artificial Intelligence Act: Regulating Subliminal AI Systems, by Rostam J. Neuwirth, London, Routledge, 2023, xiii + 129 pp., £48.99 (cloth). [REVIEW] Zhonghua Wu & Le Cheng - forthcoming - The European Legacy:1-3.
    With the rapid advances in science and technology, Artificial Intelligence (AI) has been developing exponentially and transforming the world in ways we could never have envisioned. Its applications...
  37. AI models and the future of genomic research and medicine: True sons of knowledge? Harald König, Daniel Frank, Martina Baumann & Reinhard Heil - 2021 - Bioessays 43 (10):2100025.
    The increasing availability of large‐scale, complex data has made research into how human genomes determine physiology in health and disease, as well as its application to drug development and medicine, an attractive field for artificial intelligence (AI) approaches. Looking at recent developments, we explore how such approaches interconnect and may conflict with needs for and notions of causal knowledge in molecular genetics and genomic medicine. We provide reasons to suggest that—while capable of generating predictive knowledge at unprecedented pace and scale—if (...)
  38. A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities. Claudio Novelli, Philipp Hacker, Jessica Morley, Jarle Trondal & Luciano Floridi - manuscript
    Regulation is nothing without enforcement. This particularly holds for the dynamic field of emerging technologies. Hence, this article has two ambitions. First, it explains how the EU’s new Artificial Intelligence Act (AIA) will be implemented and enforced by various institutional bodies, thus clarifying the governance framework of the AIA. Second, it proposes a normative model of governance, providing recommendations to ensure uniform and coordinated execution of the AIA and the fulfilment of the legislation. Taken together, the article explores how (...)
  39. Embedding Values in Artificial Intelligence (AI) Systems. Ibo van de Poel - 2020 - Minds and Machines 30 (3):385-409. (40 citations)
    Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and (moral) values that should be adhered to in the design and deployment of artificial intelligence (AI). These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I propose an account for determining when an AI system can be said to embody (...)
  40. The regulation of artificial intelligence. Giusella Finocchiaro - forthcoming - AI and Society:1-8. (1 citation)
    Before embarking on a discussion of the regulation of artificial intelligence (AI), it is first necessary to define the subject matter regulated. Defining artificial intelligence is a difficult endeavour, and many definitions have been proposed over the years. Although more than 70 years have passed since it was adopted, the most convincing definition is still nonetheless that proposed by Turing; in any case, it is important to be mindful of the risk of anthropomorphising artificial intelligence, which may arise in (...)
  41. Mapping the Stony Road toward Trustworthy AI: Expectations, Problems, Conundrums. Gernot Rieder, Judith Simon & Pak-Hang Wong - forthcoming - In Marcello Pelillo & Teresa Scantamburlo (eds.), Machines We Trust: Perspectives on Dependable AI. Cambridge, Mass.: (1 citation)
    The notion of trustworthy AI has been proposed in response to mounting public criticism of AI systems, in particular with regard to the proliferation of such systems into ever more sensitive areas of human life without proper checks and balances. In Europe, the High-Level Expert Group on Artificial Intelligence has recently presented its Ethics Guidelines for Trustworthy AI. To some, the guidelines are an important step for the governance of AI. To others, the guidelines distract effort from genuine AI (...). In this chapter, we engage in a critical discussion of the concept of trustworthy AI by probing the concept both on theoretical and practical grounds, assessing its substance and the feasibility of its intent. We offer a concise overview of the guidelines and their vision for trustworthy AI and examine the conceptual underpinnings of trustworthy AI by considering how notions of 'trust' and 'trustworthiness' have been discussed in the philosophical literature. We then discuss several epistemic obstacles and moral requirements when striving to achieve trustworthy AI in practice before concluding with an argument in support of the establishment of a trustworthy AI culture that respects and protects foundational values.
  42. AI ethics with Chinese characteristics? Concerns and preferred solutions in Chinese academia. Junhua Zhu - forthcoming - AI and Society:1-14.
    Since Chinese scholars are playing an increasingly important role in shaping the national landscape of discussion on AI ethics, understanding their ethical concerns and preferred solutions is essential for global cooperation on governance of AI. This article, therefore, provides the first elaborated analysis of the discourse on AI ethics in Chinese academia, via a systematic literature review. This article has three main objectives: to identify the most discussed ethical issues of AI in Chinese academia and those being left out; (...)
  43. The European legislation on AI: a brief analysis of its philosophical approach. Luciano Floridi - 2021 - Philosophy and Technology 34 (2):215–222. (9 citations)
    On 21 April 2021, the European Commission published the proposal of the new EU Artificial Intelligence Act (AIA) — one of the most influential steps taken so far to regulate AI internationally. This article highlights some foundational aspects of the Act and analyses the philosophy behind its proposal.
  44. AI and suicide risk prediction: Facebook live and its aftermath. Dolores Peralta - forthcoming - AI and Society:1-13.
    As suicide rates increase worldwide, the mental health industry has reached an impasse in attempts to assess patients, predict risk, and prevent suicide. Traditional assessment tools are no more accurate than chance, prompting the need to explore new avenues in artificial intelligence (AI). Early studies into these tools show potential with higher accuracy rates than previous methods alone. Medical researchers, computer scientists, and social media companies are exploring these avenues. While Facebook leads the pack, its efforts stem from scrutiny following (...)
  45. The selective deployment of AI in healthcare. Robert Vandersluis & Julian Savulescu - 2024 - Bioethics 38 (5):391-400.
    Machine‐learning algorithms have the potential to revolutionise diagnostic and prognostic tasks in health care, yet algorithmic performance levels can be materially worse for subgroups that have been underrepresented in algorithmic training data. Given this epistemic deficit, the inclusion of underrepresented groups in algorithmic processes can result in harm. Yet delaying the deployment of algorithmic systems until more equitable results can be achieved would avoidably and foreseeably lead to a significant number of unnecessary deaths in well‐represented populations. Faced with this dilemma (...)
  46. AI Development and the ‘Fuzzy Logic’ of Chinese Cyber Security and Data Laws. Max Parasol - 2021 - Cambridge University Press.
    The book examines the extent to which Chinese cyber and network security laws and policies act as a constraint on the emergence of Chinese entrepreneurialism and innovation. Specifically, how the contradictions and tensions between data localisation laws affect innovation in artificial intelligence. The book surveys the globalised R&D networks, and how the increasing use of open-source platforms by leading Chinese AI firms during 2017–2020, exacerbated the apparent contradiction between Network Sovereignty and Chinese innovation. The drafting of the Cyber Security Law (...)
  47. The right to a second opinion on Artificial Intelligence diagnosis—Remedying the inadequacy of a risk-based regulation. Thomas Ploug & Søren Holm - 2022 - Bioethics 37 (3):303-311.
    In this paper, we argue that patients who are subjects of Artificial Intelligence (AI)-supported diagnosis and treatment planning should have a right to a second opinion, but also that this right should not necessarily be construed as a right to a physician opinion. The right to a second opinion could potentially be satisfied by another independent AI system. Our considerations on the right to second opinion are embedded in the wider debate on different approaches to the regulation of AI, (...)
  48. The Oxford Handbook of AI Governance. Justin B. Bullock, Yu-Che Chen, Johannes Himmelreich, Valerie M. Hudson, Anton Korinek, Matthew M. Young & Baobao Zhang (eds.) - 2023 - Oxford University Press.
    As the capabilities of Artificial Intelligence (AI) have increased over recent years, so have the challenges of how to govern its usage. Consequently, prominent stakeholders across academia, government, industry, and civil society have called for states to devise and deploy principles, innovative policies, and best practices to regulate and oversee these increasingly powerful AI tools. Developing a robust AI governance system requires extensive collective efforts throughout the world. It also raises old questions of politics, democracy, and administration, but with the (...)
  49. Varieties of transparency: exploring agency within AI systems. Gloria Andrada, Robert William Clowes & Paul Smart - 2023 - AI and Society 38 (4):1321-1331. (9 citations)
    AI systems play an increasingly important role in shaping and regulating the lives of millions of human beings across the world. Calls for greater _transparency_ from such systems have been widespread. However, there is considerable ambiguity concerning what “transparency” actually means, and therefore, what greater transparency might entail. While, according to some debates, transparency requires _seeing through_ the artefact or device, widespread calls for transparency imply _seeing into_ different aspects of AI systems. These two notions are in apparent tension with (...)
  50. Promoting responsible AI: A European perspective on the governance of artificial intelligence in media and journalism. Colin Porlezza - 2023 - Communications 48 (3):370-394.
    Artificial intelligence and automation have become pervasive in news media, influencing journalism from news gathering to news distribution. As algorithms are increasingly determining editorial decisions, specific concerns have been raised with regard to the responsible and accountable use of AI-driven tools by news media, encompassing new regulatory and ethical questions. This contribution aims to analyze whether and to what extent the use of AI technology in news media and journalism is currently regulated and debated within the European Union and the (...)
Results 1–50 of 985