Results for 'AI Risk'

997 found
  1. Taking AI Risks Seriously: a New Assessment Model for the AI Act. Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To (...)
    4 citations
  2. AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act. Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2024 - Digital Society 3 (13):1-29.
    The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for the assessment of these risks in concrete situations. Risks are broadly categorized based on the application areas of AI systems and ambiguous risk factors. This paper suggests a methodology for assessing AI risk magnitudes, focusing on the construction of real-world risk scenarios. To this scope, we propose to integrate the AIA with (...)
    2 citations
  3. AI-Related Risk: An Epistemological Approach. Giacomo Zanotti, Daniele Chiffi & Viola Schiaffonati - 2024 - Philosophy and Technology 37 (2):1-18.
    Risks connected with AI systems have become a recurrent topic in public and academic debates, and the European proposal for the AI Act explicitly adopts a risk-based tiered approach that associates different levels of regulation with different levels of risk. However, a comprehensive and general framework to think about AI-related risk is still lacking. In this work, we aim to provide an epistemological analysis of such risk building upon the existing literature on disaster risk analysis (...)
  4. Extinction Risks from AI: Invisible to Science? Vojtech Kovarik, Christiaan van Merwijk & Ida Mattsson - manuscript
    In an effort to inform the discussion surrounding existential risks from AI, we formulate Extinction-level Goodhart’s Law as “Virtually any goal specification, pursued to the extreme, will result in the extinction of humanity”, and we aim to understand which formal models are suitable for investigating this hypothesis. Note that we remain agnostic as to whether Extinction-level Goodhart’s Law holds or not. As our key contribution, we identify a set of conditions that are necessary for a model that aims to be (...)
  5. Existential risk from AI and orthogonality: Can we have it both ways? Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot (...)
    5 citations
  6. Matched design for marginal causal effect on restricted mean survival time in observational studies. Bo Lu, Ai Ni & Zihan Lin - 2023 - Journal of Causal Inference 11 (1).
    Investigating the causal relationship between exposure and time-to-event outcome is an important topic in biomedical research. Previous literature has discussed the potential issues of using hazard ratio (HR) as the marginal causal effect measure due to noncollapsibility. In this article, we advocate using restricted mean survival time (RMST) difference as a marginal causal effect measure, which is collapsible and has a simple interpretation as the difference of area under survival curves over a certain time horizon. To address both measured and (...)
  7. AI and the falling sky: interrogating X-Risk. Nancy S. Jecker, Caesar Alimsinya Atuire, Jean-Christophe Bélisle-Pipon, Vardit Ravitsky & Anita Ho - forthcoming - Journal of Medical Ethics.
    The Buddhist Jātaka tells the tale of a hare lounging under a palm tree who becomes convinced the Earth is coming to an end when a ripe bael fruit falls on its head. Soon all the hares are running; other animals join them, forming a stampede of deer, boar, elk, buffalo, wild oxen, rhinoceros, tigers and elephants, loudly proclaiming the earth is ending.1 In the American retelling, the hare is ‘chicken little,’ and the exaggerated fear is that the sky is (...)
  8. Current cases of AI misalignment and their implications for future risks. Leonard Dung - 2023 - Synthese 202 (5):1-23.
    How can one build AI systems such that they pursue the goals their designers want them to pursue? This is the alignment problem. Numerous authors have raised concerns that, as research advances and systems become more powerful over time, misalignment might lead to catastrophic outcomes, perhaps even to the extinction or permanent disempowerment of humanity. In this paper, I analyze the severity of this risk based on current instances of misalignment. More specifically, I argue that contemporary large language models (...)
    2 citations
  9. Medical AI, Inductive Risk, and the Communication of Uncertainty: The Case of Disorders of Consciousness. Jonathan Birch - forthcoming - Journal of Medical Ethics.
    Some patients, following brain injury, do not outwardly respond to spoken commands, yet show patterns of brain activity that indicate responsiveness. This is “cognitive-motor dissociation” (CMD). Recent research has used machine learning to diagnose CMD from electroencephalogram (EEG) recordings. These techniques have high false discovery rates, raising a serious problem of inductive risk. It is no solution to communicate the false discovery rates directly to the patient’s family, because this information may confuse, alarm and mislead. Instead, we need a (...)
  10. AI Deception: A Survey of Examples, Risks, and Potential Solutions. Peter Park, Simon Goldstein, Aidan O'Gara, Michael Chen & Dan Hendrycks - manuscript
    This paper argues that a range of current AI systems have learned how to deceive humans. We define deception as the systematic inducement of false beliefs in the pursuit of some outcome other than the truth. We first survey empirical examples of AI deception, discussing both special-use AI systems (including Meta's CICERO) built for specific competitive situations, and general-purpose AI systems (such as large language models). Next, we detail several risks from AI deception, such as fraud, election tampering, and losing (...)
  11. Fairness and accountability of AI in disaster risk management: Opportunities and challenges. Caroline Gevaert, Mary Carman, Benjamin Rosman, Yola Georgiadou & Robert Soden - 2021 - Patterns 11 (2).
    Artificial Intelligence (AI) is increasingly being used in disaster risk management applications to predict the effect of upcoming disasters, plan for mitigation strategies, and determine who needs how much aid after a disaster strikes. The media is filled with unintended ethical concerns of AI algorithms, such as image recognition algorithms not recognizing persons of color or racist algorithmic predictions of whether offenders will recidivate. We know such unintended ethical consequences must play a role in DRM as well, yet there (...)
    1 citation
  12. Innovation, risk and control: The true trend is ‘from tool to purpose’—A discussion on the standardization of AI. Oriana Chaves - forthcoming - AI and Society:1-12.
    In this text, our question is what is the current regulatory trend in countries that are not considered central in the development of artificial intelligence, such as Brazil: a preventive approach, or an experimental approach? We will analyze the bills (PL) that are being processed in legislative houses at the state level, and at the federal level, highlighting some elements, such as: Delimitation of the object (conceptualization), fundamental principles, ethical guidelines, relationship with human work, human supervision, and guidelines for public (...)
  13. Evaluating approaches for reducing catastrophic risks from AI. Leonard Dung - 2024 - AI and Ethics.
    According to a growing number of researchers, AI may pose catastrophic – or even existential – risks to humanity. Catastrophic risks may be taken to be risks of 100 million human deaths, or a similarly bad outcome. I argue that such risks – while contested – are sufficiently likely to demand rigorous discussion of potential societal responses. Subsequently, I propose four desiderata for approaches to the reduction of catastrophic risks from AI. The quality of such approaches can be assessed by (...)
  14. AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke & Effy Vayena - 2018 - Minds and Machines 28 (4):689-707.
    This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other (...)
    184 citations
  15. Identify and Assess Hydropower Project’s Multidimensional Social Impacts with Rough Set and Projection Pursuit Model. Hui An, Wenjing Yang, Jin Huang, Ai Huang, Zhongchi Wan & Min An - 2020 - Complexity 2020:1-16.
    To realize the coordinated and sustainable development of hydropower projects and regional society, comprehensively evaluating hydropower projects’ influence is critical. Usually, hydropower project development has an impact on environmental geology and social and regional cultural development. Based on comprehensive consideration of complicated geological conditions, fragile ecological environment, resettlement of reservoir area, and other factors of future hydropower development in each country, we have constructed a comprehensive evaluation index system of hydropower projects, including 4 first-level indicators of social economy, environment, safety, (...)
    1 citation
  16. Why AI Doomsayers are Like Sceptical Theists and Why it Matters. John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set-up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s superintelligence: paths, dangers, strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate (...)
    4 citations
  17. AI and suicide risk prediction: Facebook live and its aftermath. Dolores Peralta - forthcoming - AI and Society:1-13.
    As suicide rates increase worldwide, the mental health industry has reached an impasse in attempts to assess patients, predict risk, and prevent suicide. Traditional assessment tools are no more accurate than chance, prompting the need to explore new avenues in artificial intelligence (AI). Early studies into these tools show potential with higher accuracy rates than previous methods alone. Medical researchers, computer scientists, and social media companies are exploring these avenues. While Facebook leads the pack, its efforts stem from scrutiny (...)
  18. How to deal with risks of AI suffering. Leonard Dung - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    1. 1.1. Suffering is bad. This is why, ceteris paribus, there are strong moral reasons to prevent suffering. Moreover, typically, those moral reasons are stronger when the amount of suffering at st...
    1 citation
  19. The Emotional Risk Posed by AI (Artificial Intelligence) in the Workplace. Maria Danielsen - 2023 - Norsk Filosofisk Tidsskrift 58 (2-3):106-117.
    The existential risk posed by ubiquitous artificial intelligence (AI) is a subject of frequent discussion with descriptions of the prospect of misuse, the fear of mass destruction, and the singularity. In this paper I address an under-explored category of existential risk posed by AI, namely emotional risk. Values are a main source of emotions. By challenging some of our most essential values, AI systems are therefore likely to expose us to emotional risks such as loss of care (...)
  20. Risks of artificial intelligence. Vincent C. Müller (ed.) - 2016 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated (...)
  21. Three lines of defense against risks from AI. Jonas Schuett - forthcoming - AI and Society:1-15.
    Organizations that develop and deploy artificial intelligence (AI) systems need to manage the associated risks—for economic, legal, and ethical reasons. However, it is not always clear who is responsible for AI risk management. The three lines of defense (3LoD) model, which is considered best practice in many industries, might offer a solution. It is a risk management framework that helps organizations to assign and coordinate risk management roles and responsibilities. In this article, I suggest ways in which (...)
    1 citation
  22. Innovating with confidence: embedding AI governance and fairness in a financial services risk management framework. Luciano Floridi, Michelle Seng Ah Lee & Alexander Denev - 2020 - Berkeley Technology Law Journal 34.
    An increasing number of financial services (FS) companies are adopting solutions driven by artificial intelligence (AI) to gain operational efficiencies, derive strategic insights, and improve customer engagement. However, the rate of adoption has been low, in part due to the apprehension around its complexity and self-learning capability, which makes auditability a challenge in a highly regulated industry. There is limited literature on how FS companies can implement the governance and controls specific to AI-driven solutions. AI auditing cannot be performed in (...)
  23. Welcome to the Machine: AI, Existential Risk, and the Iron Cage of Modernity. Jay A. Gupta - 2023 - Telos: Critical Theory of the Contemporary 2023 (203):163-169.
    Excerpt: Recent advances in the functional power of artificial intelligence (AI) have prompted an urgent warning from industry leaders and researchers concerning its “profound risks to society and humanity.”1 Their open letter is admirable not only for its succinct identification of said risks, which include the mass dissemination of misinformation, loss of jobs, and even the possible extinction of our species, but also for its clear normative framing of the problem: “Should we let machines flood our information channels with propaganda and (...)
  24. AI research ethics is in its infancy: the EU’s AI Act can make it a grown-up. Anaïs Resseguier & Fabienne Ufert - 2024 - Research Ethics 20 (2):143-155.
    As the artificial intelligence (AI) ethics field is currently working towards its operationalisation, ethics review as carried out by research ethics committees (RECs) constitutes a powerful, but so far underdeveloped, framework to make AI ethics effective in practice at the research level. This article contributes to the elaboration of research ethics frameworks for research projects developing and/or using AI. It highlights that these frameworks are still in their infancy and in need of a structure and criteria to ensure AI research (...)
  25. Ethics in Online AI-Based Systems: Risks and Opportunities in Current Technological Trends. Joan Casas-Roma, Santi Caballe & Jordi Conesa (eds.) - 2024 - Academic Press.
    Recent technological advancements have deeply transformed society and the way people interact with each other. Instantaneous communication platforms have allowed connections with other people, forming global communities, and creating unprecedented opportunities in many sectors, making access to online resources more ubiquitous by reducing limitations imposed by geographical distance and temporal constraints. These technological developments bear ethically relevant consequences with their deployment, and legislation often lags behind such advancements. Because the appearance and deployment of these technologies happen much faster than legislative (...)
  26. All too human? Identifying and mitigating ethical risks of Social AI. Henry Shevlin - manuscript
    This paper presents an overview of the risks and benefits of Social AI, understood as conversational AI systems that cater to human social needs like romance, companionship, or entertainment. Section 1 of the paper provides a brief history of conversational AI systems and introduces conceptual distinctions to help distinguish varieties of Social AI and pathways to their deployment. Section 2 of the paper adds further context via a brief discussion of anthropomorphism and its relevance to assessment of human-chatbot relationships. Section (...)
    1 citation
  27. Classification of Global Catastrophic Risks Connected with Artificial Intelligence. Alexey Turchin & David Denkenberger - 2020 - AI and Society 35 (1):147-163.
    A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of (...)
    11 citations
  28. When Doctors and AI Interact: on Human Responsibility for Artificial Risks. Mario Verdicchio & Andrea Perin - 2022 - Philosophy and Technology 35 (1):1-28.
    A discussion concerning whether to conceive Artificial Intelligence systems as responsible moral entities, also known as “artificial moral agents”, has been going on for some time. In this regard, we argue that the notion of “moral agency” is to be attributed only to humans based on their autonomy and sentience, which AI systems lack. We analyze human responsibility in the presence of AI systems in terms of meaningful control and due diligence and argue against fully automated systems in medicine. With (...)
    1 citation
  29. Human Autonomy at Risk? An Analysis of the Challenges from AI. Carina Prunkl - 2024 - Minds and Machines 34 (3):1-21.
    Autonomy is a core value that is deeply entrenched in the moral, legal, and political practices of many societies. The development and deployment of artificial intelligence (AI) have raised new questions about AI’s impacts on human autonomy. However, systematic assessments of these impacts are still rare and often held on a case-by-case basis. In this article, I provide a conceptual framework that both ties together seemingly disjoint issues about human autonomy, as well as highlights differences between them. In the first (...)
  30. Editorial: Risks of artificial intelligence. Vincent C. Müller - 2015 - In Risks of general intelligence. CRC Press - Chapman & Hall. pp. 1-8.
    If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and (...)
    1 citation
  31. Transdisciplinary AI Observatory—Retrospective Analyses and Future-Oriented Contradistinctions. Nadisha-Marie Aliman, Leon Kester & Roman Yampolskiy - 2021 - Philosophies 6 (1):6.
    In the last years, artificial intelligence (AI) safety gained international recognition in the light of heterogeneous safety-critical and ethical issues that risk overshadowing the broad beneficial impacts of AI. In this context, the implementation of AI observatory endeavors represents one key research direction. This paper motivates the need for an inherently _transdisciplinary_ AI observatory approach integrating diverse retrospective and counterfactual views. We delineate aims and limitations while providing hands-on-advice utilizing _concrete practical examples_. Distinguishing between unintentionally and intentionally triggered AI (...)
  32. The Blueprint for an AI Bill of Rights: In Search of Enaction, at Risk of Inaction. Emmie Hine & Luciano Floridi - 2023 - Minds and Machines 33 (2):285-292.
    The US is promoting a new vision of a “Good AI Society” through its recent AI Bill of Rights. This offers a promising vision of community-oriented equity unique amongst peer countries. However, it leaves the door open for potential rights violations. Furthermore, it may have some federal impact, but it is non-binding, and without concrete legislation, the private sector is likely to ignore it.
  33. No we shouldn’t be afraid of medical AI; it involves risks and opportunities. Rosalind J. McDougall - 2019 - Journal of Medical Ethics 45 (8):559-559.
    In contrast to Di Nucci’s characterisation, my argument is not a technoapocalyptic one. The view I put forward is that systems like IBM’s Watson for Oncology create both risks and opportunities from the perspective of shared decision-making. In this response, I address the issues that Di Nucci raises and highlight the importance of bioethicists engaging critically with these developing technologies.
    3 citations
  34. Equity, autonomy, and the ethical risks and opportunities of generalist medical AI. Reuben Sass - 2023 - AI and Ethics:1-11.
    This paper considers the ethical risks and opportunities presented by generalist medical artificial intelligence (GMAI), a kind of dynamic, multimodal AI proposed by Moor et al. (2023) for use in health care. The research objective is to apply widely accepted principles of biomedical ethics to analyze the possible consequences of GMAI, while emphasizing the distinctions between GMAI and current-generation, task-specific medical AI. The principles of autonomy and health equity in particular provide useful guidance for the ethical risks and opportunities of (...)
     
  35. Deny, dismiss and downplay: developers’ attitudes towards risk and their role in risk creation in the field of healthcare-AI. Shaul A. Duke - 2022 - Ethics and Information Technology 24 (1).
    Developers are often the engine behind the creation and implementation of new technologies, including in the artificial intelligence surge that is currently underway. In many cases these new technologies introduce significant risk to affected stakeholders; risks that can be reduced and mitigated by such a dominant party. This is fully recognized by texts that analyze risks in the current AI transformation, which suggest voluntary adoption of ethical standards and imposing ethical standards via regulation and oversight as tools to compel (...)
    1 citation
  36. Ethical considerations in risk management of autonomous and intelligent systems. Anetta Jedličková - 2024 - Ethics and Bioethics (in Central Europe) 14 (1-2):80-95.
    The rapid development of Artificial Intelligence (AI) has raised concerns regarding the potential risks it may pose to humans, society, and the environment. Recent advancements have intensified these concerns, emphasizing the need for a deeper understanding of the technical, societal, and ethical aspects that could lead to adverse or harmful failures in decisions made by autonomous and intelligent systems (AIS). This paper aims to examine the ethical dimensions of risk management in AIS. Its objective is to highlight the significance (...)
  37. Opening the black boxes of the black carpet in the era of risk society: a sociological analysis of AI, algorithms and big data at work through the case study of the Greek postal services. Christos Kouroutzas & Venetia Palamari - forthcoming - AI and Society:1-14.
    This article draws on contributions from the Sociology of Science and Technology and Science and Technology Studies, the Sociology of Risk and Uncertainty, and the Sociology of Work, focusing on the transformations of employment regarding expanded automation, robotization and informatization. The new work patterns emerging due to the introduction of software and hardware technologies, which are based on artificial intelligence, algorithms, big data gathering and robotic systems are examined closely. This article attempts to “open the black boxes” of the (...)
  38. In AI we trust? Perceptions about automated decision-making by artificial intelligence. Theo Araujo, Natali Helberger, Sanne Kruikemeier & Claes H. de Vreese - 2020 - AI and Society 35 (3):611-623.
    Fueled by ever-growing amounts of (digital) data and advances in artificial intelligence, decision-making in contemporary societies is increasingly delegated to automated processes. Drawing from social science theories and from the emerging body of research about algorithmic appreciation and algorithmic perceptions, the current study explores the extent to which personal characteristics can be linked to perceptions of automated decision-making by AI, and the boundary conditions of these perceptions, namely the extent to which such perceptions differ across media, (public) health, and judicial (...)
    42 citations
  39. AI and Phronesis. Dan Feldman & Nir Eisikovits - 2022 - Moral Philosophy and Politics 9 (2):181-199.
    We argue that the growing prevalence of statistical machine learning in everyday decision making – from creditworthiness to police force allocation – effectively replaces many of our humdrum practical judgments and that this will eventually undermine our capacity for making such judgments. We lean on Aristotle’s famous account of how phronesis and moral virtues develop to make our case. If Aristotle is right that the habitual exercise of practical judgment allows us to incrementally hone virtues, and if AI saves us (...)
    3 citations
  40. AI knows best? Avoiding the traps of paternalism and other pitfalls of AI-based patient preference prediction. Andrea Ferrario, Sophie Gloeckler & Nikola Biller-Andorno - 2023 - Journal of Medical Ethics 49 (3):185-186.
    In our recent article ‘The Ethics of the Algorithmic Prediction of Goal of Care Preferences: From Theory to Practice’1, we aimed to ignite a critical discussion on why and how to design artificial intelligence (AI) systems assisting clinicians and next-of-kin by predicting goal of care preferences for incapacitated patients. Here, we would like to thank the commentators for their valuable responses to our work. We identified three core themes in their commentaries: (1) the risks of AI paternalism, (2) worries about (...)
    3 citations
  41. Clinical Decisions Using AI Must Consider Patient Values. Jonathan Birch, Kathleen A. Creel, Abhinav K. Jha & Anya Plutynski - 2022 - Nature Medicine 28:229–232.
    Built-in decision thresholds for AI diagnostics are ethically problematic, as patients may differ in their attitudes about the risk of false-positive and false-negative results, which will require that clinicians assess patient values.
  42. New developments in the philosophy of AI.Vincent C. Müller - 2016 - In Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence. Cham: Springer.
    The philosophy of AI has seen some changes, in particular: 1) AI moves away from cognitive science, and 2) the long term risks of AI now appear to be a worthy concern. In this context, the classical central concerns – such as the relation of cognition and computation, embodiment, intelligence & rationality, and information – will regain urgency.
  43. Testing the Black Box: Institutional Investors, Risk Disclosure, and Ethical AI.Trooper Sanders - 2020 - Philosophy and Technology 34 (1):105-109.
    The integration of artificial intelligence throughout the economy makes the ethical risks it poses a mainstream concern beyond technology circles. Building on their growing role bringing greater transparency to climate risk, institutional investors can play a constructive role in advancing the responsible evolution of AI by demanding more rigorous analysis and disclosure of ethical risks.
  44. Contestations in urban mobility: rights, risks, and responsibilities for Urban AI.Nitin Sawhney - 2023 - AI and Society 38 (3):1083-1098.
    Cities today are dynamic urban ecosystems with evolving physical, socio-cultural, and technological infrastructures. Many contestations arise from the effects of inequitable access and intersecting crises currently faced by cities, which may be amplified by the algorithmic and data-centric infrastructures being introduced in urban contexts. In this article, I argue for a critical lens into how inter-related urban technologies, big data and policies, constituted as Urban AI, offer both challenges and opportunities. I examine scenarios of contestations in _urban mobility_, defined broadly (...)
  45. Against AI-improved Personal Memory.Björn Lundgren - 2020 - In Aging between Participation and Simulation. pp. 223–234.
    In 2017, Tom Gruber held a TED talk, in which he presented a vision of improving and enhancing humanity with AI technology. Specifically, Gruber suggested that an AI-improved personal memory (APM) would benefit people by improving their “mental gain”, making us more creative, improving our “social grace”, enabling us to do “science on our own data about what makes us feel good and stay healthy”, and, for people suffering from dementia, it “could make a difference between a life of isolation (...)
  46. Exploring Factors of the Willingness to Accept AI-Assisted Learning Environments: An Empirical Investigation Based on the UTAUT Model and Perceived Risk Theory.Wentao Wu, Ben Zhang, Shuting Li & Hehai Liu - 2022 - Frontiers in Psychology 13.
    Artificial intelligence technology has been widely applied in many fields. AI-assisted learning environments have been implemented in classrooms to facilitate the innovation of pedagogical models. However, college students' willingness to accept AI-assisted learning environments has been ignored. Exploring the factors that influence college students' willingness to use AI can promote AI technology application in higher education. Based on the Unified Theory of Acceptance and Use of Technology and the theory of perceived risk, this study identified six factors that influence (...)
  47. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles.Steven Umbrello & Roman Yampolskiy - 2022 - International Journal of Social Robotics 14 (2):313-322.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model for (...)
  48. AI-Related Misdirection Awareness in AIVR.Nadisha-Marie Aliman & Leon Kester - manuscript
    Recent AI progress led to a boost in beneficial applications from multiple research areas including VR. Simultaneously, in this newly unfolding deepfake era, ethically and security-relevant disagreements arose in the scientific community regarding the epistemic capabilities of present-day AI. However, given what is at stake, one can postulate that for a responsible approach, prior to engaging in a rigorous epistemic assessment of AI, humans may profit from a self-questioning strategy, an examination and calibration of the experience of their own epistemic (...)
  49. AI ethics should not remain toothless! A call to bring back the teeth of ethics.Rowena Rodrigues & Anaïs Rességuier - 2020 - Big Data and Society 7 (2).
    Ethics has powerful teeth, but these are barely being used in the ethics of AI today – it is no wonder the ethics of AI is then blamed for having no teeth. This article argues that ‘ethics’ in the current AI ethics field is largely ineffective, trapped in an ‘ethical principles’ approach and as such particularly prone to manipulation, especially by industry actors. Using ethics as a substitute for law risks its abuse and misuse. This significantly limits what ethics can (...)
  50. Risks of artificial general intelligence.Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
    Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# - Risks of general artificial intelligence, Vincent C. Müller, pages 297-301 - Autonomous technology and the greater human good - Steve Omohundro - pages 303-315 - - - The errors, insights and lessons of famous AI predictions – and what they mean for the future - Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh - pages 317-342 - - (...)
1 — 50 / 997