Results for 'AI Act'

996 found
  1. AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2024 - Digital Society 3 (13):1-29.
    The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for the assessment of these risks in concrete situations. Risks are broadly categorized based on the application areas of AI systems and ambiguous risk factors. This paper suggests a methodology for assessing AI risk magnitudes, focusing on the construction of real-world risk scenarios. To this end, we propose to integrate the AIA with a framework developed by (...)
  2. AI research ethics is in its infancy: the EU’s AI Act can make it a grown-up.Anaïs Resseguier & Fabienne Ufert - 2024 - Research Ethics 20 (2):143-155.
    As the artificial intelligence (AI) ethics field is currently working towards its operationalisation, ethics review as carried out by research ethics committees (RECs) constitutes a powerful, but so far underdeveloped, framework to make AI ethics effective in practice at the research level. This article contributes to the elaboration of research ethics frameworks for research projects developing and/or using AI. It highlights that these frameworks are still in their infancy and in need of a structure and criteria to ensure AI research (...)
  3. Taking AI Risks Seriously: a New Assessment Model for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, (...)
  4. A Robust Governance for the AI Act: AI Office, AI Board, Scientific Panel, and National Authorities.Claudio Novelli, Philipp Hacker, Jessica Morley, Jarle Trondal & Luciano Floridi - manuscript
    Regulation is nothing without enforcement. This particularly holds for the dynamic field of emerging technologies. Hence, this article has two ambitions. First, it explains how the EU’s new Artificial Intelligence Act (AIA) will be implemented and enforced by various institutional bodies, thus clarifying the governance framework of the AIA. Second, it proposes a normative model of governance, providing recommendations to ensure uniform and coordinated execution of the AIA and the fulfilment of the legislation. Taken together, the article explores how the (...)
  5. Use case cards: a use case reporting framework inspired by the European AI Act.Emilia Gómez, Sandra Baldassarri, David Fernández-Llorca & Isabelle Hupont - 2024 - Ethics and Information Technology 26 (2):1-23.
    Despite recent efforts by the Artificial Intelligence (AI) community to move towards standardised procedures for documenting models, methods, systems or datasets, there is currently no methodology focused on use cases aligned with the risk-based approach of the European AI Act (AI Act). In this paper, we propose a new framework for the documentation of use cases that we call use case cards, based on the use case modelling included in the Unified Modeling Language (UML) standard. Unlike other documentation methodologies, we (...)
  6. Institutionalised distrust and human oversight of artificial intelligence: towards a democratic design of AI governance under the European Union AI Act.Johann Laux - forthcoming - AI and Society:1-14.
    Human oversight has become a key mechanism for the governance of artificial intelligence (“AI”). Human overseers are supposed to increase the accuracy and safety of AI systems, uphold human values, and build trust in the technology. Empirical research suggests, however, that humans are not reliable in fulfilling their oversight tasks. They may be lacking in competence or be harmfully incentivised. This creates a challenge for human oversight to be effective. In addressing this challenge, this article aims to make three contributions. (...)
  7. AI-informed acting: an Arendtian perspective.Daniil Koloskov - forthcoming - Phenomenology and the Cognitive Sciences:1-18.
    In this paper, I will investigate the possible impact of weak artificial intelligence (more specifically, I will concentrate on deep learning) on human capability of action. For this goal, I will first address Arendt’s philosophy of action, which seeks to emphasize the distinguishing elements of action that set it apart from other forms of human activity. According to Arendt, action should be conceived as _praxis_, an activity that has its goal in its own very performance. The authentic meaning of action (...)
  8. AI Assertion.Patrick Butlin & Emanuel Viebahn - forthcoming - Ergo: An Open Access Journal of Philosophy.
    Modern generative AI systems have shown the capacity to produce remarkably fluent language, prompting debates both about their semantic understanding and, less prominently, about whether they can perform speech acts. This paper addresses the latter question, focusing on assertion. We argue that to be capable of assertion, an entity must meet two requirements: it must produce outputs with descriptive functions, and it must be capable of being sanctioned by agents with which it interacts. The second requirement arises from the nature (...)
  9. The EU Artificial Intelligence Act: Regulating Subliminal AI Systems, by Rostam J. Neuwirth, London, Routledge, 2023, xiii + 129 pp., £48.99 (cloth). [REVIEW]Zhonghua Wu & Le Cheng - 2024 - The European Legacy 29 (3-4):431-433.
    With the rapid advances in science and technology, Artificial Intelligence (AI) has been developing exponentially and transforming the world in ways we could never have envisioned. Its applications...
  10. Explainable AI lacks regulative reasons: why AI and human decision-making are not equally opaque.Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this argument (...)
  11. AI and the expert; a blueprint for the ethical use of opaque AI.Amber Ross - forthcoming - AI and Society:1-12.
    The increasing demand for transparency in AI has recently come under scrutiny. The question is often posed in terms of “epistemic double standards”, and whether the standards for transparency in AI ought to be higher than, or equivalent to, our standards for ordinary human reasoners. I agree that the push for increased transparency in AI deserves closer examination, and that comparing these standards to our standards of transparency for other opaque systems is an appropriate starting point. I suggest that a (...)
  12. Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity.Claudio Novelli, Federico Casolari, Philipp Hacker, Giorgio Spedicato & Luciano Floridi - manuscript
    The advent of Generative AI, particularly through Large Language Models (LLMs) like ChatGPT and its successors, marks a paradigm shift in the AI landscape. Advanced LLMs exhibit multimodality, handling diverse data formats, thereby broadening their application scope. However, the complexity and emergent autonomy of these models introduce challenges in predictability and legal compliance. This paper analyses the legal and regulatory implications of Generative AI and LLMs in the European Union context, focusing on liability, privacy, intellectual property, and cybersecurity. It examines (...)
  13. AI support for ethical decision-making around resuscitation: proceed with care.Nikola Biller-Andorno, Andrea Ferrario, Susanne Joebges, Tanja Krones, Federico Massini, Phyllis Barth, Georgios Arampatzis & Michael Krauthammer - 2022 - Journal of Medical Ethics 48 (3):175-183.
    Artificial intelligence (AI) systems are increasingly being used in healthcare, thanks to the high level of performance that these systems have proven to deliver. So far, clinical applications have focused on diagnosis and on prediction of outcomes. It is less clear in what way AI can or should support complex clinical decisions that crucially depend on patient preferences. In this paper, we focus on the ethical questions arising from the design, development and deployment of AI systems to support decision-making around (...)
  14. What is a subliminal technique? An ethical perspective on AI-driven influence.Juan Pablo Bermúdez, Rune Nyrup, Sebastian Deterding, Celine Mougenot, Laura Moradbakhti, Fangzhou You & Rafael A. Calvo - 2023 - IEEE ETHICS-2023 Conference Proceedings.
    Concerns about threats to human autonomy feature prominently in the field of AI ethics. One aspect of this concern relates to the use of AI systems for problematically manipulative influence. In response to this, the European Union’s draft AI Act (AIA) includes a prohibition on AI systems deploying subliminal techniques that alter people’s behavior in ways that are reasonably likely to cause harm (Article 5(1)(a)). Critics have argued that the term ‘subliminal techniques’ is too narrow to capture the target cases (...)
  15. AI-Related Risk: An Epistemological Approach.Giacomo Zanotti, Daniele Chiffi & Viola Schiaffonati - 2024 - Philosophy and Technology 37 (2):1-18.
    Risks connected with AI systems have become a recurrent topic in public and academic debates, and the European proposal for the AI Act explicitly adopts a risk-based tiered approach that associates different levels of regulation with different levels of risk. However, a comprehensive and general framework to think about AI-related risk is still lacking. In this work, we aim to provide an epistemological analysis of such risk building upon the existing literature on disaster risk analysis and reduction. We show how (...)
  16. AI Systems Under Criminal Law: a Legal Analysis and a Regulatory Perspective.Francesca Lagioia & Giovanni Sartor - 2020 - Philosophy and Technology 33 (3):433-465.
    Criminal liability for acts committed by AI systems has recently become a hot legal topic. This paper includes three different contributions. The first contribution is an analysis of the extent to which an AI system can satisfy the requirements for criminal liability: accomplishing an actus reus, having the corresponding mens rea, possessing the cognitive capacities needed for responsibility. The second contribution is a discussion of criminal activity accomplished by an AI entity, with reference to a recent case involving an online (...)
  17. Sinful AI?Michael Wilby - 2023 - In Critical Muslim, 47. London: Hurst Publishers. pp. 91-108.
    Could the concept of 'evil' apply to AI? Drawing on PF Strawson's framework of reactive attitudes, this paper argues that we can understand evil as involving agents who are neither fully inside nor fully outside our moral practices. It involves agents whose abilities and capacities are enough to make them morally responsible for their actions, but whose behaviour is far enough outside of the norms of our moral practices to be labelled 'evil'. Understood as such, the paper argues that, when (...)
  18. Friendly Superintelligent AI: All You Need Is Love.Michael Prinzing - 2017 - In Vincent C. Müller (ed.), Philosophy and theory of artificial intelligence 2017. Berlin: Springer.
    There is a non-trivial chance that sometime in the future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become “superintelligent”, vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure—long before one arrives—that any superintelligence we build will consistently act in ways congenial to our interests. This is a very difficult challenge in part because most (...)
  19. Can and Should Language Models Act Politically? Hannah Arendt’s Theory of Action in Comparison with Generative AI.Lukas Ohly - 2024 - Filozofia 79 (5):501-513.
  20. AI assisted ethics.Amitai Etzioni & Oren Etzioni - 2016 - Ethics and Information Technology 18 (2):149-156.
    The growing number of ‘smart’ instruments, those equipped with AI, has raised concerns because these instruments make autonomous decisions; that is, they act beyond the guidelines provided them by programmers. Hence, the question the makers and users of smart instruments face is how to ensure that these instruments will not engage in unethical conduct. The article suggests that to proceed we need a new kind of AI program—oversight programs—that will monitor, audit, and hold operational AI programs accountable.
  21. Promoting responsible AI: A European perspective on the governance of artificial intelligence in media and journalism.Colin Porlezza - 2023 - Communications 48 (3):370-394.
    Artificial intelligence and automation have become pervasive in news media, influencing journalism from news gathering to news distribution. As algorithms are increasingly determining editorial decisions, specific concerns have been raised with regard to the responsible and accountable use of AI-driven tools by news media, encompassing new regulatory and ethical questions. This contribution aims to analyze whether and to what extent the use of AI technology in news media and journalism is currently regulated and debated within the European Union and the (...)
  22. Will AI avoid exploitation? Artificial general intelligence and expected utility theory.Adam Bales - forthcoming - Philosophical Studies:1-20.
    A simple argument suggests that we can fruitfully model advanced AI systems using expected utility theory. According to this argument, an agent will need to act as if maximising expected utility if they’re to avoid exploitation. Insofar as we should expect advanced AI to avoid exploitation, it follows that we should expect advanced AI to act as if maximising expected utility. I spell out this argument more carefully and demonstrate that it fails, but show that the manner of its failure (...)
  23. Adopting AI: how familiarity breeds both trust and contempt.Michael C. Horowitz, Lauren Kahn, Julia Macdonald & Jacquelyn Schneider - forthcoming - AI and Society:1-15.
    Despite pronouncements about the inevitable diffusion of artificial intelligence and autonomous technologies, in practice, it is human behavior, not technology in a vacuum, that dictates how technology seeps into—and changes—societies. To better understand how human preferences shape technological adoption and the spread of AI-enabled autonomous technologies, we look at representative adult samples of US public opinion in 2018 and 2020 on the use of four types of autonomous technologies: vehicles, surgery, weapons, and cyber defense. By focusing on these four diverse (...)
  24. AI, Suicide Prevention and the Limits of Beneficence.Bert Heinrichs & Aurélie Halsband - 2022 - Philosophy and Technology 35 (4):1-18.
    In this paper, we address the question of whether AI should be used for suicide prevention on social media data. We focus on algorithms that can identify persons with suicidal ideation based on their postings on social media platforms and investigate whether private companies like Facebook are justified in using these. To find out if that is the case, we start with providing two examples for AI-based means of suicide prevention in social media. Subsequently, we frame suicide prevention as an (...)
  25. AI Chatbots and Challenges of HIPAA Compliance for AI Developers and Vendors.Delaram Rezaeikhonakdar - 2023 - Journal of Law, Medicine and Ethics 51 (4):988-995.
    Developers and vendors of large language models (“LLMs”) — with ChatGPT, Google Bard, and Microsoft’s Bing at the forefront — can be subject to the Health Insurance Portability and Accountability Act of 1996 (“HIPAA”) when they process protected health information (“PHI”) on behalf of HIPAA-covered entities. In doing so, they become business associates or subcontractors of a business associate under HIPAA.
  26. Speech Act Theory and Ethics of Speech Processing as Distinct Stages: the ethics of collecting, contextualizing and the releasing of (speech) data.Jolly Thomas, Lalaram Arya, Mubarak Hussain & Prasanna Srm - 2023 - 2023 IEEE International Symposium on Ethics in Engineering, Science, and Technology (ETHICS), West Lafayette, IN, USA.
    Using speech act theory from the Philosophy of Language, this paper attempts to develop an ethical framework for the phenomenon of speech processing. We use the concepts of the illocutionary force and the illocutionary content of a speech act to explain the ethics of speech processing. By emphasizing the different stages involved in speech processing, we explore the distinct ethical issues that arise in relation to each stage. Input, processing, and output are the different ethically relevant stages under which a (...)
  27. Social Acts in Digital Environments.Andrea Addis, Olimpia G. Loddo & Massimiliano Saba - 2021 - Phenomenology and Mind 20:64-75.
    Adolf Reinach’s theory of social acts and Czesław Znamierowski’s theory of the environment can open a new perspective of analysis in the fields of computer science and digital communication. This paper begins by analysing the performance of social acts in two categories of digital environments: (i) fictional digital environments and (ii) real digital environments. The analysis will be supported by examples from the history of computer science. In both kinds of digital environments, organigrams play a significant role and depend on (...)
  28. Dubito Ergo Sum: Exploring AI Ethics.Viktor Dörfler & Giles Cuthbert - 2024 - HICSS 57: Hawaii International Conference on System Sciences, Honolulu, HI.
    We paraphrase Descartes’ famous dictum in the area of AI ethics where the “I doubt and therefore I am” is suggested as a necessary aspect of morality. Therefore AI, which cannot doubt itself, cannot possess moral agency. Of course, this is not the end of the story. We explore various aspects of the human mind that substantially differ from AI, which includes the sensory grounding of our knowing, the act of understanding, and the significance of being able to doubt ourselves. (...)
  29. Superintelligence AI and Skepticism.Joseph Corabi - 2017 - Journal of Evolution and Technology 27 (1):4-23.
    It has become fashionable to worry about the development of superintelligent AI that results in the destruction of humanity. This worry is not without merit, but it may be overstated. This paper explores some previously undiscussed reasons to be optimistic that, even if superintelligent AI does arise, it will not destroy us. These have to do with the possibility that a superintelligent AI will become mired in skeptical worries that its superintelligence cannot help it to solve. I argue that superintelligent (...)
  30. Can we Bridge AI’s responsibility gap at Will?Maximilian Kiener - 2022 - Ethical Theory and Moral Practice 25 (4):575-593.
    Artificial intelligence increasingly executes tasks that previously only humans could do, such as drive a car, fight in war, or perform a medical operation. However, as the very best AI systems tend to be the least controllable and the least transparent, some scholars argued that humans can no longer be morally responsible for some of the AI-caused outcomes, which would then result in a responsibility gap. In this paper, I assume, for the sake of argument, that at least some of (...)
  31. AI Development and the ‘Fuzzy Logic’ of Chinese Cyber Security and Data Laws.Max Parasol - 2021 - Cambridge University Press.
    The book examines the extent to which Chinese cyber and network security laws and policies act as a constraint on the emergence of Chinese entrepreneurialism and innovation. Specifically, it considers how the contradictions and tensions between data localisation laws affect innovation in artificial intelligence. The book surveys the globalised R&D networks, and how the increasing use of open-source platforms by leading Chinese AI firms during 2017–2020 exacerbated the apparent contradiction between Network Sovereignty and Chinese innovation. The drafting of the Cyber Security Law (...)
  32. The Moral Status of AI Entities.Joan Llorca Albareda, Paloma García & Francisco Lara - 2023 - In Francisco Lara & Jan Deckers (eds.), Ethics of Artificial Intelligence. Springer Nature Switzerland. pp. 59-83.
    The emergence of AI is posing serious challenges to standard conceptions of moral status. New non-biological entities are able to act and make decisions rationally. The question arises, in this regard, as to whether AI systems possess or can possess the necessary properties to be morally considerable. In this chapter, we have undertaken a systematic analysis of the various debates that are taking place about the moral status of AI. First, we have discussed the possibility that AI systems, by virtue (...)
  33. Maximizing team synergy in AI-related interdisciplinary groups: an interdisciplinary-by-design iterative methodology.Piercosma Bisconti, Davide Orsitto, Federica Fedorczyk, Fabio Brau, Marianna Capasso, Lorenzo De Marinis, Hüseyin Eken, Federica Merenda, Mirko Forti, Marco Pacini & Claudia Schettini - 2022 - AI and Society 1 (1):1-10.
    In this paper, we propose a methodology to maximize the benefits of interdisciplinary cooperation in AI research groups. Firstly, we build the case for the importance of interdisciplinarity in research groups as the best means to tackle the social implications brought about by AI systems, against the backdrop of the EU Commission proposal for an Artificial Intelligence Act. As we are an interdisciplinary group, we address the multi-faceted implications of the mass-scale diffusion of AI-driven technologies. The result of our exercise (...)
  34. Friendly Superintelligent AI: All You Need is Love.Michael Prinzing - 2012 - In Vincent C. Müller (ed.), The Philosophy & Theory of Artificial Intelligence. Springer. pp. 288-301.
    There is a non-trivial chance that sometime in the (perhaps somewhat distant) future, someone will build an artificial general intelligence that will surpass human-level cognitive proficiency and go on to become "superintelligent", vastly outperforming humans. The advent of superintelligent AI has great potential, for good or ill. It is therefore imperative that we find a way to ensure-long before one arrives-that any superintelligence we build will consistently act in ways congenial to our interests. This is a very difficult challenge in (...)
  35. How to teach responsible AI in Higher Education: challenges and opportunities.Andrea Aler Tubella, Marçal Mora-Cantallops & Juan Carlos Nieves - 2023 - Ethics and Information Technology 26 (1):1-14.
    In recent years, the European Union has advanced towards responsible and sustainable Artificial Intelligence (AI) research, development and innovation. While the Ethics Guidelines for Trustworthy AI released in 2019 and the AI Act in 2021 set the starting point for a European Ethical AI, there are still several challenges to translate such advances into the public debate, education and practical learning. This paper contributes towards closing this gap by reviewing the approaches that can be found in the existing literature and (...)
  36. The European legislation on AI: a brief analysis of its philosophical approach.Luciano Floridi - 2021 - Philosophy and Technology 34 (2):215–222.
    On 21 April 2021, the European Commission published the proposal of the new EU Artificial Intelligence Act (AIA) — one of the most influential steps taken so far to regulate AI internationally. This article highlights some foundational aspects of the Act and analyses the philosophy behind its proposal.
  37. Transparency for AI systems: a value-based approach.Stefan Buijsman - 2024 - Ethics and Information Technology 26 (2):1-11.
    With the widespread use of artificial intelligence, it becomes crucial to provide information about these systems and how they are used. Governments aim to disclose their use of algorithms to establish legitimacy and the EU AI Act mandates forms of transparency for all high-risk and limited-risk systems. Yet, what should the standards for transparency be? What information is needed to show to a wide public that a certain system can be used legitimately and responsibly? I argue that process-based approaches fail (...)
  38. Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI.Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Journal of Medical Ethics 47 (5):medethics-2020-106820.
    The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust transpire with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By outlining that more transparency in algorithms is not always necessary, and by explaining that (...)
  39. Ethical Perceptions of AI in Hiring and Organizational Trust: The Role of Performance Expectancy and Social Influence.Maria Figueroa-Armijos, Brent B. Clark & Serge P. da Motta Veiga - 2023 - Journal of Business Ethics 186 (1):179-197.
    The use of artificial intelligence (AI) in hiring entails vast ethical challenges. As such, using an ethical lens to study this phenomenon is to better understand whether and how AI matters in hiring. In this paper, we examine whether ethical perceptions of using AI in the hiring process influence individuals’ trust in the organizations that use it. Building on the organizational trust model and the unified theory of acceptance and use of technology, we explore whether ethical perceptions are shaped by (...)
  40. Causation in AI and law.Jos Lehmann, Joost Breuker & Bob Brouwer - 2004 - Artificial Intelligence and Law 12 (4):279-315.
    Reasoning about causation in fact is an essential element of attributing legal responsibility. Therefore, the automation of the attribution of legal responsibility requires a modelling effort aimed at the following: a thorough understanding of the relation between the legal concepts of responsibility and of causation in fact; a thorough understanding of the relation between causation in fact and the common sense concept of causation; and, finally, the specification of an ontology of the concepts that are minimally required for (automatic) common (...)
  41. Developing safer AI–concepts from economics to the rescue.Pankaj Kumar Maskara - forthcoming - AI and Society:1-13.
    With the rapid advancement of AI, there exists a possibility of rogue human actor(s) taking control of a potent AI system or an AI system redefining its objective function such that it presents an existential threat to mankind or severely curtails its freedom. Therefore, some suggest an outright ban on AI development while others profess international agreement on constraining specific types of AI. These approaches are untenable because countries will continue developing AI for national defense, regardless. Some suggest having an (...)
  42. Lawmaps: enabling legal AI development through visualisation of the implicit structure of legislation and lawyerly process.Scott McLachlan, Evangelia Kyrimi, Kudakwashe Dube, Norman Fenton & Lisa C. Webley - 2023 - Artificial Intelligence and Law 31 (1):169-194.
    Modelling that exploits visual elements and information visualisation are important areas that have contributed immensely to understanding and the computerisation advancements in many domains and yet remain unexplored for the benefit of the law and legal practice. This paper investigates the challenge of modelling and expressing structures and processes in legislation and the law by using visual modelling and information visualisation (InfoVis) to assist accessibility of legal knowledge, practice and knowledge formalisation as a basis for legal AI. The paper uses (...)
  43. We are Building Gods: AI as the Anthropomorphised Authority of the Past.Carl Öhman - 2024 - Minds and Machines 34 (1):1-18.
    This article argues that large language models (LLMs) should be interpreted as a form of gods. In a theological sense, a god is an immortal being that exists beyond time and space. This is clearly nothing like LLMs. In an anthropological sense, however, a god is rather defined as the personified authority of a group through time—a conceptual tool that molds a collective of ancestors into a unified agent or voice. This is exactly what LLMs are. They are products of (...)
  44. The Touching Test: AI and the Future of Human Intimacy.Martha J. Reineke - 2022 - Contagion: Journal of Violence, Mimesis, and Culture 29 (1):123-146.
    In lieu of an abstract, here is a brief excerpt of the content:The Touching TestAI and the Future of Human IntimacyMartha J. Reineke (bio)Each Friday, the New York Times publishes Love Letters, a compendium of articles on courtship. A recent story featured Melinda, a real estate agent, and Calvin, a human resources director.1 They had met at a market deli counter. On their first date, a lasagna dinner at Melinda's home, Calvin posed the question, "What are you looking for in (...)
  45. A Misdirected Principle with a Catch: Explicability for AI.Scott Robbins - 2019 - Minds and Machines 29 (4):495-514.
    There is widespread agreement that there should be a principle requiring that artificial intelligence be ‘explicable’. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU commission, etc. all include a principle for AI that falls under the umbrella of ‘explicability’. Roughly, the principle states that “for AI to promote and not constrain human autonomy, our ‘decision about who should decide’ must be informed by knowledge of how AI would act instead of us” :689–707, 2018). There (...)
  46. Negotiating the authenticity of AI: how the discourse on AI rejects human indeterminacy.Siri Beerends & Ciano Aydin - forthcoming - AI and Society:1-14.
    In this paper, we demonstrate how the language and reasonings that academics, developers, consumers, marketers, and journalists deploy to accept or reject AI as authentic intelligence has far-reaching bearing on how we understand our human intelligence and condition. The discourse on AI is part of what we call the “authenticity negotiation process” through which AI’s “intelligence” is given a particular meaning and value. This has implications for scientific theory, research directions, ethical guidelines, design principles, funding, media attention, and the way (...)
  47. Stretching the notion of moral responsibility in nanoelectronics by applying AI.Robert Albin & Amos Bardea - 2021 - In Robert Albin & Amos Bardea (eds.), Ethics in Nanotechnology Social Sciences and Philosophical Aspects, Vol. 2. Berlin: De Gruyter. pp. 75-87.
    The development of machine learning and deep learning (DL) in the field of AI (artificial intelligence) is the direct result of the advancement of nano-electronics. Machine learning is a function that provides the system with the capacity to learn from data without being programmed explicitly. It is basically a mathematical and probabilistic model. DL is part of machine learning methods based on artificial neural networks, simply called neural networks (NNs), as they are inspired by the biological NNs that constitute organic (...)
  48. Where lies the grail? AI, common sense, and human practical intelligence.William Hasselberger & Micah Lott - forthcoming - Phenomenology and the Cognitive Sciences:1-22.
    The creation of machines with intelligence comparable to human beings—so-called "human-level” and “general” intelligence—is often regarded as the Holy Grail of Artificial Intelligence (AI) research. However, many prominent discussions of AI lean heavily on the notion of human-level intelligence to frame AI research, but then rely on conceptions of human cognitive capacities, including “common sense,” that are sketchy, one-sided, philosophically loaded, and highly contestable. Our goal in this essay is to bring into view some underappreciated features of the practical intelligence (...)
  49. The German Act on Autonomous Driving: Why Ethics Still Matters.Alexander Kriebitz, Raphael Max & Christoph Lütge - 2022 - Philosophy and Technology 35 (2):1-13.
    The German Act on Autonomous Driving constitutes the first national framework on level four autonomous vehicles and has received attention from policy makers, AI ethics scholars and legal experts in autonomous driving. Owing to Germany’s role as a global hub for car manufacturing, the following paper sheds light on the act’s position within the ethical discourse and how it reconfigures the balance between legislation and ethical frameworks. Specifically, in this paper, we highlight areas that need to be more worked out (...)
  50. Could a Created Being Ever be Creative? Some Philosophical Remarks on Creativity and AI Development.Yasemin J. Erden - 2010 - Minds and Machines 20 (3):349-362.
    Creativity has a special role in enabling humans to develop beyond the fulfilment of simple primary functions. This factor is significant for Artificial Intelligence (AI) developers who take replication to be the primary goal, since moves toward creating autonomous artificial-beings beg questions about their potential for creativity. Using Wittgenstein’s remarks on rule-following and language-games, I argue that although some AI programs appear creative, to call these programmed acts creative in our terms is to misunderstand the use of this word in (...)
1 — 50 / 996