Results for 'Trustworthy AI'

995 found
  1. Trustworthy AI: a plea for modest anthropocentrism. Rune Nyrup - 2023 - Asian Journal of Philosophy 2 (2):1-10.
    Simion and Kelp defend a non-anthropocentric account of trustworthy AI, based on the idea that the obligations of AI systems should be sourced in purely functional norms. In this commentary, I highlight some pressing counterexamples to their account, involving AI systems that reliably fulfil their functions but are untrustworthy because those functions are antagonistic to the interests of the trustor. Instead, I outline an alternative account, based on the idea that AI systems should not be considered primarily as tools (...)
    1 citation
  2. Trustworthy AI: AI made in Germany and Europe? Hartmut Hirsch-Kreinsen & Thorben Krokowski - forthcoming - AI and Society:1-11.
    As the capabilities of artificial intelligence (AI) continue to expand, concerns are also growing about the ethical and social consequences of unregulated development and, above all, use of AI systems in a wide range of social areas. It is therefore indisputable that the application of AI requires social standardization and regulation. For years, innovation policy measures and the most diverse activities of European and German institutions have been directed toward this goal. Under the label “Trustworthy AI” (TAI), a promise (...)
  3. Ethical funding for trustworthy AI: proposals to address the responsibilities of funders to ensure that projects adhere to trustworthy AI practice. Marie Oldfield - 2021 - AI and Ethics 1 (1):1.
    AI systems that demonstrate significant bias or lower-than-claimed accuracy, resulting in individual and societal harms, continue to be reported. Such reports raise the question of why such systems continue to be funded, developed and deployed despite the many published ethical AI principles. This paper focuses on the funding processes for AI research grants, which we have identified as a gap in the current range of ethical AI solutions such as AI procurement guidelines, AI impact assessments and (...)
    1 citation
  4. Simion and Kelp on trustworthy AI. J. Adam Carter - 2023 - Asian Journal of Philosophy 2 (1):1-8.
    Simion and Kelp offer a prima facie very promising account of trustworthy AI. One benefit of the account is that it elegantly explains trustworthiness in the case of cancer diagnostic AIs, which involve the acquisition by the AI of a representational etiological function. In this brief note, I offer some reasons to think that their account cannot be extended — at least not straightforwardly — beyond such cases (i.e., to cases of AIs with non-representational etiological functions) without incurring the (...)
    1 citation
  5. Mapping the Stony Road toward Trustworthy AI: Expectations, Problems, Conundrums. Gernot Rieder, Judith Simon & Pak-Hang Wong - forthcoming - In Marcello Pelillo & Teresa Scantamburlo (eds.), Machines We Trust: Perspectives on Dependable AI. Cambridge, Mass.:
    The notion of trustworthy AI has been proposed in response to mounting public criticism of AI systems, in particular with regard to the proliferation of such systems into ever more sensitive areas of human life without proper checks and balances. In Europe, the High-Level Expert Group on Artificial Intelligence has recently presented its Ethics Guidelines for Trustworthy AI. To some, the guidelines are an important step for the governance of AI. To others, the guidelines distract effort from genuine (...)
    1 citation
  6. Adaptable robots, ethics, and trust: a qualitative and philosophical exploration of the individual experience of trustworthy AI. Stephanie Sheir, Arianna Manzini, Helen Smith & Jonathan Ives - forthcoming - AI and Society:1-14.
    Much has been written about the need for trustworthy artificial intelligence (AI), but the underlying meaning of trust and trustworthiness can vary or be used in confusing ways. It is not always clear whether individuals are speaking of a technology’s trustworthiness, a developer’s trustworthiness, or simply of gaining the trust of users by any means. In sociotechnical circles, trustworthiness is often used as a proxy for ‘the good’, illustrating the moral heights to which technologies and developers ought to aspire, (...)
  7. A Leap of Faith: Is There a Formula for “Trustworthy” AI? Matthias Braun, Hannah Bleher & Patrik Hummel - 2021 - Hastings Center Report 51 (3):17-22.
    Trust is one of the big buzzwords in debates about the shaping of society, democracy, and emerging technologies. For example, one prominent idea put forward by the High‐Level Expert Group on Artificial Intelligence appointed by the European Commission is that artificial intelligence should be trustworthy. In this essay, we explore the notion of trust and argue that both proponents and critics of trustworthy AI have flawed pictures of the nature of trust. We develop an approach to understanding trust (...)
    10 citations
  8. Keep trusting! A plea for the notion of Trustworthy AI. Giacomo Zanotti, Mattia Petrolo, Daniele Chiffi & Viola Schiaffonati - forthcoming - AI and Society:1-12.
    A lot of attention has recently been devoted to the notion of Trustworthy AI (TAI). However, the very applicability of the notions of trust and trustworthiness to AI systems has been called into question. A purely epistemic account of trust can hardly ground the distinction between trustworthy and merely reliable AI, while it has been argued that insisting on the importance of the trustee’s motivations and goodwill makes the notion of TAI a categorical error. After providing an overview (...)
    1 citation
  9. Quasi-Metacognitive Machines: Why We Don’t Need Morally Trustworthy AI and Communicating Reliability is Enough. John Dorsch & Ophelia Deroy - 2024 - Philosophy and Technology 37 (2):1-21.
    Many policies and ethical guidelines recommend developing “trustworthy AI”. We argue that developing morally trustworthy AI is not only unethical, as it promotes trust in an entity that cannot be trustworthy, but it is also unnecessary for optimal calibration. Instead, we show that reliability, exclusive of moral trust, entails the appropriate normative constraints that enable optimal calibration and mitigate the vulnerability that arises in high-stakes hybrid decision-making environments, without also demanding, as moral trust would, the anthropomorphization of (...)
  10. Establishing the rules for building trustworthy AI. Luciano Floridi - 2019 - Nature Machine Intelligence 1 (6):261-262.
    AI is revolutionizing everyone’s life, and it is crucial that it does so in the right way. AI’s profound and far-reaching potential for transformation concerns the engineering of systems that have some degree of autonomous agency. This is epochal and requires establishing a new, ethical balance between human and artificial autonomy.
    20 citations
  11. Ethics-based auditing to develop trustworthy AI. Jakob Mökander & Luciano Floridi - 2021 - Minds and Machines 31 (2):323–327.
    A series of recent developments points towards auditing as a promising mechanism to bridge the gap between principles and practice in AI ethics. Building on ongoing discussions concerning ethics-based auditing, we offer three contributions. First, we argue that ethics-based auditing can improve the quality of decision making, increase user satisfaction, unlock growth potential, enable law-making, and relieve human suffering. Second, we highlight current best practices to support the design and implementation of ethics-based auditing: To be feasible and effective, ethics-based auditing (...)
    20 citations
  13. Network of AI and trustworthy: response to Simion and Kelp’s account of trustworthy AI. Fei Song - 2023 - Asian Journal of Philosophy 2 (2):1-8.
    Simion and Kelp develop the obligation-based account of trustworthiness as a compelling general account of trustworthiness and then apply this account to various instances of AI. By doing so, they explain in what way any AI can be considered trustworthy, as per the general account. Simion and Kelp reject any account of trustworthiness that relies on assumptions of agency that are too anthropocentric, such as the assumption that being trustworthy must involve goodwill. I argue that goodwill is a necessary (...)
  14. Access to Artificial Intelligence for Persons with Disabilities: Legal and Ethical Questions Concerning the Application of Trustworthy AI. Kristi Joamets & Archil Chochia - 2021 - Acta Baltica Historiae Et Philosophiae Scientiarum 9 (1):51-66.
    Digitalisation and emerging technologies affect our lives and are increasingly present in a growing number of fields. The ethical implications of the digitalisation process have therefore long been discussed by scholars. The rapid development of artificial intelligence has taken the legal and ethical discussion to another level. There is no doubt that AI can have a positive impact on society. The focus here, however, is on its more negative impact. This article will specifically consider how the law and ethics (...)
  15. Deconstructing controversies to design a trustworthy AI future. Francesca Trevisan, Pinelopi Troullinou, Dimitris Kyriazanos, Evan Fisher, Paola Fratantoni, Claire Morot Sir & Virginia Bertelli - 2024 - Ethics and Information Technology 26 (2):1-15.
    Technology policy needs to be receptive to different social needs and realities to ensure that innovations are both ethically developed and accessible. This article proposes a new method to integrate social controversies into foresight scenarios as a means to enhance the trustworthiness and inclusivity of policymaking around Artificial Intelligence. Foresight exercises are used to anticipate future tech challenges and to inform policy development. However, the integration of social controversies within these exercises remains an unexplored area. This article aims to bridge (...)
  16. Autonomy-Oriented Justification of Norms for Trustworthy AI Systems in the Age of Global Digitalization. Bernhard Jakl - 2023 - Archiv für Rechts- und Sozialphilosophie 109 (4):443-463.
  17. Trustworthy medical AI systems need to know when they don’t know. Thomas Grote - forthcoming - Journal of Medical Ethics.
    There is much to learn from Durán and Jongsma’s paper. One particularly important insight concerns the relationship between epistemology and ethics in medical artificial intelligence. In clinical environments, the task of AI systems is to provide risk estimates or diagnostic decisions, which then need to be weighed by physicians. Hence, while the implementation of AI systems might give rise to ethical issues—for example, overtreatment, defensive medicine or paternalism—the issue that lies at the heart is an epistemic problem: how can physicians (...)
    4 citations
  18. Trustworthy artificial intelligence and ethical design: public perceptions of trustworthiness of an AI-based decision-support tool in the context of intrapartum care. Angeliki Kerasidou, Antoniya Georgieva & Rachel Dlugatch - 2023 - BMC Medical Ethics 24 (1):1-16.
    Background: Despite the recognition that developing artificial intelligence (AI) that is trustworthy is necessary for public acceptability and the successful implementation of AI in healthcare contexts, perspectives from key stakeholders are often absent from discourse on the ethical design, development, and deployment of AI. This study explores the perspectives of birth parents and mothers on the introduction of AI-based cardiotocography (CTG) in the context of intrapartum care, focusing on issues pertaining to trust and trustworthiness. Methods: Seventeen semi-structured interviews were conducted with birth (...)
    1 citation
  19. The trustworthiness of AI: Comments on Simion and Kelp’s account. Dong-Yong Choi - 2023 - Asian Journal of Philosophy 2 (1):1-9.
    Simion and Kelp explain the trustworthiness of an AI based on that AI’s disposition to meet its obligations. Roughly speaking, according to Simion and Kelp, an AI is trustworthy regarding its task if and only if that AI is obliged to complete the task and its disposition to complete the task is strong enough. Furthermore, an AI is obliged to complete a task in the case where the task is the AI’s etiological function or design function. This account has (...)
  20. Practicing trustworthy machine learning: consistent, transparent, and fair AI pipelines. Yada Pruksachatkun - 2022 - Boston: O'Reilly. Edited by Matthew McAteer & Subhabrata Majumdar.
    With the increasing use of AI in high-stakes domains such as medicine, law, and defense, organizations spend a lot of time and money to make ML models trustworthy. Many books on the subject offer deep dives into theories and concepts. This guide provides a practical starting point to help development teams produce models that are secure, more robust, less biased, and more explainable. Authors Yada Pruksachatkun, Matthew McAteer, and Subhabrata Majumdar translate best practices in the academic literature for curating (...)
  21. Dual-Use and Trustworthy? A Mixed Methods Analysis of AI Diffusion Between Civilian and Defense R&D. Christian Reuter, Thea Riebe & Stefka Schmid - 2022 - Science and Engineering Ethics 28 (2):1-23.
    Artificial Intelligence (AI) seems to be impacting all industry sectors, while becoming a motor for innovation. The diffusion of AI from the civilian sector to the defense sector, and AI’s dual-use potential has drawn attention from security and ethics scholars. With the publication of the ethical guideline Trustworthy AI by the European Union (EU), normative questions on the application of AI have been further evaluated. In order to draw conclusions on Trustworthy AI as a point of reference for (...)
  22. Justifying our Credences in the Trustworthiness of AI Systems: A Reliabilistic Approach. Andrea Ferrario - manuscript
    We address an open problem in the epistemology of artificial intelligence (AI), namely, the justification of the epistemic attitudes we have towards the trustworthiness of AI systems. We start from a key consideration: the trustworthiness of an AI is a time-relative property of the system, with two distinct facets. One is the actual trustworthiness of the AI, and the other is the perceived trustworthiness of the system as assessed by its users while interacting with it. We show that credences, namely, (...)
  23. Trustworthy artificial intelligence. Mona Simion & Christoph Kelp - 2023 - Asian Journal of Philosophy 2 (1):1-12.
    This paper develops an account of trustworthy AI. Its central idea is that whether AIs are trustworthy is a matter of whether they live up to their function-based obligations. We argue that this account serves to advance the literature in a couple of important ways. First, it serves to provide a rationale for why a range of properties that are widely assumed in the scientific literature, as well as in policy, to be required of trustworthy AI, such (...)
    8 citations
  24. In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Mark Ryan - 2020 - Science and Engineering Ethics 26 (5):2749-2767.
    One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission’s High-level Expert Group on AI (HLEG) have adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics guidelines for trustworthy AI, 2019, p. 35). Trust is one of the most important and defining (...)
    44 citations
  25. (E)‐Trust and Its Function: Why We Shouldn't Apply Trust and Trustworthiness to Human–AI Relations. Pepijn Al - 2023 - Journal of Applied Philosophy 40 (1):95-108.
    With an increasing use of artificial intelligence (AI) systems, theorists have analyzed and argued for the promotion of trust in AI and trustworthy AI. Critics have objected that AI does not have the characteristics to be an appropriate subject for trust. However, this argumentation is open to counterarguments. Firstly, rejecting trust in AI denies the trust attitudes that some people experience. Secondly, we can trust other non‐human entities, such as animals and institutions, so why can we not trust AI (...)
    1 citation
  26. A method for ethical AI in defence: A case study on developing trustworthy autonomous systems. Tara Roberson, Stephen Bornstein, Rain Liivoja, Simon Ng, Jason Scholz & Kate Devitt - 2022 - Journal of Responsible Technology 11:100036.
  27. Review of “AI assurance: towards trustworthy, explainable, safe, and ethical AI” by Feras A. Batarseh and Laura J. Freeman, Academic Press, 2023. [REVIEW] Jialei Wang & Li Fu - forthcoming - AI and Society:1-2.
  28. Questioning the Role of Moral AI as an Adviser within the Framework of Trustworthiness Ethics. Silviya Serafimova - 2021 - Filosofiya-Philosophy 30 (4):402-412.
    The main objective of this article is to demonstrate why, despite the growing interest in justifying AI’s trustworthiness, one can argue for AI’s reliability. By analyzing why trustworthiness ethics in Nickel’s sense provides some well-grounded hints for rethinking the rational, affective and normative accounts of trust in respect to AI, I examine some concerns about the trustworthiness of Savulescu and Maslen’s model of moral AI as an adviser. Specifically, I tackle one of its exemplifications regarding Klincewicz’s hypothetical scenario of John (...)
  29. An AI ethics ‘David and Goliath’: value conflicts between large tech companies and their employees. Mark Ryan, Eleni Christodoulou, Josephina Antoniou & Kalypso Iordanou - forthcoming - AI and Society:1-16.
    Artificial intelligence ethics requires a united approach from policymakers, AI companies, and individuals, in the development, deployment, and use of these technologies. However, sometimes discussions can become fragmented because of the different levels of governance or because of different values, stakeholders, and actors involved. Recently, these conflicts became very visible, with such examples as the dismissal of AI ethics researcher Dr. Timnit Gebru from Google and the resignation of whistle-blower Frances Haugen from Facebook. Underpinning each debacle was a conflict between (...)
    1 citation
  30. Explainable AI lacks regulative reasons: why AI and human decision‑making are not equally opaque. Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
    4 citations
  31. Learning to Live with Strange Error: Beyond Trustworthiness in Artificial Intelligence Ethics. Charles Rathkopf & Bert Heinrichs - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-13.
    Position papers on artificial intelligence (AI) ethics are often framed as attempts to work out technical and regulatory strategies for attaining what is commonly called trustworthy AI. In such papers, the technical and regulatory strategies are frequently analyzed in detail, but the concept of trustworthy AI is not. As a result, it remains unclear. This paper lays out a variety of possible interpretations of the concept and concludes that none of them is appropriate. The central problem is that, (...)
  32. Capable but Amoral? Comparing AI and Human Expert Collaboration in Ethical Decision Making. Suzanne Tolmeijer, Markus Christen, Serhiy Kandul, Markus Kneer & Abraham Bernstein - 2022 - Proceedings of the 2022 Chi Conference on Human Factors in Computing Systems 160:1–17.
    While artificial intelligence (AI) is increasingly applied for decision-making processes, ethical decisions pose challenges for AI applications. Given that humans cannot always agree on the right thing to do, how would ethical decision-making by AI systems be perceived and how would responsibility be ascribed in human-AI collaboration? In this study, we investigate how the expert type (human vs. AI) and level of expert autonomy (adviser vs. decider) influence trust, perceived responsibility, and reliance. We find that participants consider humans to be (...)
    1 citation
  33. AI, Explainability and Public Reason: The Argument from the Limitations of the Human Mind. Jocelyn Maclure - 2021 - Minds and Machines 31 (3):421-438.
    Machine learning-based AI algorithms lack transparency. In this article, I offer an interpretation of AI’s explainability problem and highlight its ethical saliency. I try to make the case for the legal enforcement of a strong explainability requirement: human organizations which decide to automate decision-making should be legally obliged to demonstrate the capacity to explain and justify the algorithmic decisions that have an impact on the wellbeing, rights, and opportunities of those affected by the decisions. This legal duty can be derived (...)
    8 citations
  34. In AI We Trust Incrementally: a Multi-layer Model of Trust to Analyze Human-Artificial Intelligence Interactions. Andrea Ferrario, Michele Loi & Eleonora Viganò - 2020 - Philosophy and Technology 33 (3):523-539.
    Real engines of the artificial intelligence revolution, machine learning models, and algorithms are embedded nowadays in many services and products around us. As a society, we argue it is now necessary to transition into a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AIs to define actionable recommendations as well as normative solutions. However, both academic research and society-driven initiatives are still quite far from clearly defining a solid program of study and intervention. In (...)
    15 citations
  35. From the Ground Truth Up: Doing AI Ethics from Practice to Principles. James Brusseau - 2022 - AI and Society 37 (1):1-7.
    Recent AI ethics has focused on applying abstract principles downward to practice. This paper moves in the other direction. Ethical insights are generated from the lived experiences of AI-designers working on tangible human problems, and then cycled upward to influence theoretical debates surrounding these questions: 1) Should AI as trustworthy be sought through explainability, or accurate performance? 2) Should AI be considered trustworthy at all, or is reliability a preferable aim? 3) Should AI ethics be oriented toward establishing (...)
  36. Rethinking ethics in AI policy: a method for synthesising Graham’s critical discourse analysis approaches and the philosophical study of valuation. Nadira Talib - forthcoming - Critical Discourse Studies.
    Here I use aspects of Phil Graham’s discourse analytical work to examine forms of e/valuations and critically analyse the formulation of truths in the constitution of Artificial Intelligence (hereafter, AI). This paper focuses on two 2019 documents: Ethics guidelines for trustworthy AI (AI HLEG, 2019a) and Policy and investment recommendations for trustworthy AI (AI HLEG, 2019b). My aim here is to provide a timely contribution to contemporary philosophical–methodological innovations in documenting the constellation of values that are prefigured in (...)
  37. AI knows best? Avoiding the traps of paternalism and other pitfalls of AI-based patient preference prediction. Andrea Ferrario, Sophie Gloeckler & Nikola Biller-Andorno - 2023 - Journal of Medical Ethics 49 (3):185-186.
    In our recent article ‘The Ethics of the Algorithmic Prediction of Goal of Care Preferences: From Theory to Practice’, we aimed to ignite a critical discussion on why and how to design artificial intelligence (AI) systems assisting clinicians and next-of-kin by predicting goal of care preferences for incapacitated patients. Here, we would like to thank the commentators for their valuable responses to our work. We identified three core themes in their commentaries: (1) the risks of AI paternalism, (2) worries about (...)
    2 citations
  38. Implementing Ethics in Healthcare AI-Based Applications: A Scoping Review. Robyn Clay-Williams, Elizabeth Austin & Magali Goirand - 2021 - Science and Engineering Ethics 27 (5):1-53.
    A number of Artificial Intelligence (AI) ethics frameworks have been published in the last 6 years in response to the growing concerns posed by the adoption of AI in different sectors, including healthcare. While there is a strong culture of medical ethics in healthcare applications, AI-based Healthcare Applications (AIHA) are challenging the existing ethics and regulatory frameworks. This scoping review explores how ethics frameworks have been implemented in AIHA, how these implementations have been evaluated and whether they have been successful. (...)
    3 citations
  39. Toward trustworthy programming for autonomous concurrent systems. Lavindra de Silva & Alan Mycroft - 2023 - AI and Society 38 (2):963-965.
  40. Can robots be trustworthy? Ines Schröder, Oliver Müller, Helena Scholl, Shelly Levy-Tzedek & Philipp Kellmeyer - 2023 - Ethik in der Medizin 35 (2):221-246.
    Definition of the problem: This article critically addresses the conceptualization of trust in the ethical discussion on artificial intelligence (AI) in the specific context of social robots in care. First, we attempt to define in which respect we can speak of ‘social’ robots and how their ‘social affordances’ affect the human propensity to trust in human–robot interaction. Against this background, we examine the use of the concept of ‘trust’ and ‘trustworthiness’ with respect to the guidelines and recommendations of the High-Level (...)
  41. Modelling Accuracy and Trustworthiness of Explaining Agents. Alberto Termine, Giuseppe Primiero & Fabio Aurelio D’Asaro - 2021 - In Sujata Ghosh & Thomas Icard (eds.), Logic, Rationality, and Interaction: 8th International Workshop, Lori 2021, Xi’an, China, October 16–18, 2021, Proceedings. Springer Verlag. pp. 232-245.
    Current research in Explainable AI includes post-hoc explanation methods that focus on building transparent explaining agents able to emulate opaque ones. Such agents are naturally required to be accurate and trustworthy. However, what it means for an explaining agent to be accurate and trustworthy is far from being clear. We characterize accuracy and trustworthiness as measures of the distance between the formal properties of a given opaque system and those of its transparent explanantes. To this aim, we extend (...)
  42. How to teach responsible AI in Higher Education: challenges and opportunities. Andrea Aler Tubella, Marçal Mora-Cantallops & Juan Carlos Nieves - 2023 - Ethics and Information Technology 26 (1):1-14.
    In recent years, the European Union has advanced towards responsible and sustainable Artificial Intelligence (AI) research, development and innovation. While the Ethics Guidelines for Trustworthy AI released in 2019 and the AI Act in 2021 set the starting point for a European Ethical AI, there are still several challenges to translate such advances into the public debate, education and practical learning. This paper contributes towards closing this gap by reviewing the approaches that can be found in the existing literature (...)
  43. AI for crisis decisions. Tina Comes - 2024 - Ethics and Information Technology 26 (1):1-14.
    Increasingly, our cities are confronted with crises. Fuelled by climate change, loss of biodiversity, and increasing inequalities and fragmentation, challenges range from social unrest and outbursts of violence to heatwaves, torrential rainfall, and epidemics. As crises require rapid interventions that overwhelm human decision-making capacity, AI has been portrayed as a potential avenue to support or even automate decision-making. In this paper, I analyse the specific challenges of AI in urban crisis management as an example and test case for many (...)
  44. Transparent AI: reliabilist and proud. Abhishek Mishra - forthcoming - Journal of Medical Ethics.
    Durán et al. argue in ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI’ that traditionally proposed solutions to make black box machine learning models in medicine less opaque and more transparent are, though necessary, ultimately not sufficient to establish their overall trustworthiness. This is because transparency procedures currently employed, such as the use of an interpretable predictor, cannot fully overcome the opacity of such models. Computational reliabilism, an alternate approach to (...)
  45. Responsibility of AI Systems. Mehdi Dastani & Vahid Yazdanpanah - 2023 - AI and Society 38 (2):843-852.
    To support the trustworthiness of AI systems, it is essential to have precise methods to determine who or what is accountable for the behaviour, or the outcome, of AI systems. The assignment of responsibility to an AI system is closely related to the identification of individuals or elements that have caused the outcome of the AI system. In this work, we present an overview of approaches that aim at modelling responsibility of AI systems, discuss their advantages and shortcomings to (...)
  46. SAT: a methodology to assess the social acceptance of innovative AI-based technologies. Carmela Occhipinti, Antonio Carnevale, Luigi Briguglio, Andrea Iannone & Piercosma Bisconti - 2022 - Journal of Information, Communication and Ethics in Society 1 (In press).
    Purpose: The purpose of this paper is to present the conceptual model of an innovative methodology (SAT) to assess the social acceptance of technology, especially focusing on artificial intelligence (AI)-based technology.
    Design/methodology/approach: After a review of the literature, this paper presents the main lines by which SAT stands out from current methods, namely, a four-bubble approach and a mix of qualitative and quantitative techniques that offer assessments that look at technology as a socio-technical system. Each bubble determines the social (...)
  47. A phenomenology and epistemology of large language models: Transparency, trust, and trustworthiness. Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - forthcoming - Ethics and Information Technology.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architecture underpinning these chatbots consists of large language models (LLMs), which are generative AI (Artificial Intelligence) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
  48. Public perception of military AI in the context of techno-optimistic society. Eleri Lillemäe, Kairi Talves & Wolfgang Wagner - forthcoming - AI and Society:1-15.
    In this study, we analyse the public perception of military AI in Estonia, a techno-optimistic country with high support for science and technology. The study draws on quantitative survey data from 2021 on the public’s attitudes towards AI-based technology in general, and towards AI in developing and using weaponised unmanned ground systems (UGS) in particular. UGS are a technology that has been tested by militaries in recent years with the expectation of increasing effectiveness and saving manpower in dangerous military tasks. However, developing (...)
  49. Explainable machine learning practices: opening another black box for reliable medical AI. Emanuele Ratti & Mark Graves - 2022 - AI and Ethics:1-14.
    In the past few years, machine learning (ML) tools have been successfully implemented in the medical context. However, several practitioners have raised concerns about the lack of transparency, at the algorithmic level, of many of these tools, and solutions from the field of explainable AI (XAI) have been seen as a way to open the ‘black box’ and make the tools more trustworthy. Recently, Alex London has argued that in the medical context we do not need machine learning tools to (...)
  50. Clinicians and AI use: where is the professional guidance? Helen Smith, John Downer & Jonathan Ives - forthcoming - Journal of Medical Ethics.
    With the introduction of artificial intelligence (AI) to healthcare, there is also a need for professional guidance to support its use. New (2022) reports from the National Health Service AI Lab & Health Education England focus on healthcare workers’ understanding of and confidence in AI clinical decision support systems (AI-CDSSs), and are concerned with developing trust in, and the trustworthiness of, these systems. While they offer guidance to aid developers and purchasers of such systems, they offer little specific guidance for the clinical (...)