  1. Technology and pronouns: disrupting the ‘Natural Attitude about Gender’.Maren Behrensen - 2024 - Ethics and Information Technology 26 (3):1-10.
    I consider how video conferencing platforms have changed practices of pronoun sharing, how this development fits into recent philosophical work on conceptual and social disruption, and how it might be an effective tool to disrupt the ‘natural attitude about gender’.
  2. Negotiating becoming: a Nietzschean critique of large language models.Simon W. S. Fischer & Bas de Boer - 2024 - Ethics and Information Technology 26 (3):1-12.
    Large language models (LLMs) structure the linguistic landscape by reflecting certain beliefs and assumptions. In this paper, we address the risk of people unthinkingly adopting and being determined by the values or worldviews embedded in LLMs. We provide a Nietzschean critique of LLMs and, based on the concept of will to power, consider LLMs as will-to-power organisations. This allows us to conceptualise the interaction between self and LLMs as power struggles, which we understand as negotiation. Currently, the invisibility and incomprehensibility (...)
  3. A phenomenology and epistemology of large language models: transparency, trust, and trustworthiness.Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - 2024 - Ethics and Information Technology 26 (3):1-15.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architectures underpinning these chatbots are large language models (LLMs), which are generative artificial intelligence (AI) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
  4. Correction: ChatGPT is bullshit.Michael Townsen Hicks, James Humphries & Joe Slater - 2024 - Ethics and Information Technology 26 (3):1-2.
  5. Now you see me, now you don’t: an exploration of religious exnomination in DALL-E.Mark Alfano, Ehsan Abedin, Ritsaart Reimann, Marinus Ferreira & Marc Cheong - 2024 - Ethics and Information Technology 26 (2):1-13.
    Artificial intelligence (AI) systems are increasingly being used not only to classify and analyze but also to generate images and text. As recent work on the content produced by text and image Generative AIs has shown (e.g., Cheong et al., 2024; Acerbi & Stubbersfield, 2023), there is a risk that harms of representation and bias, already documented in prior AI and natural language processing (NLP) algorithms, may also be present in generative models. These harms relate to protected categories such as (...)
  6. Policy advice and best practices on bias and fairness in AI.Jose M. Alvarez, Alejandra Bringas Colmenarejo, Alaa Elobaid, Simone Fabbrizzi, Miriam Fahimi, Antonio Ferrara, Siamak Ghodsi, Carlos Mougan, Ioanna Papageorgiou, Paula Reyero, Mayra Russo, Kristen M. Scott, Laura State, Xuan Zhao & Salvatore Ruggieri - 2024 - Ethics and Information Technology 26 (2):1-26.
    The literature addressing bias and fairness in AI models (fair-AI) is growing at a fast pace, making it difficult for novel researchers and practitioners to have a bird’s-eye view picture of the field. In particular, many policy initiatives, standards, and best practices in fair-AI have been proposed for setting principles, procedures, and knowledge bases to guide and operationalize the management of bias and fairness. The first objective of this paper is to concisely survey the state-of-the-art of fair-AI methods and resources, (...)
  7. Tailoring responsible research and innovation to the translational context: the case of AI-supported exergaming.Sabrina Blank, Celeste Mason, Frank Steinicke & Christian Herzog - 2024 - Ethics and Information Technology 26 (2):1-16.
    We discuss the implementation of Responsible Research and Innovation (RRI) within a project for the development of an AI-supported exergame for assisted movement training, outline outcomes and reflect on methodological opportunities and limitations. We adopted the responsibility-by-design (RbD) standard (CEN CWA 17796:2021) supplemented by methods for collaborative, ethical reflection to foster and support a shift towards a culture of trustworthiness inherent to the entire development process. An embedded ethicist organised the procedure to instantiate a collaborative learning effort and implement RRI (...)
  8. Getting it right: the limits of fine-tuning large language models.Jacob Browning - 2024 - Ethics and Information Technology 26 (2):1-9.
    The surge in interest in natural language processing in artificial intelligence has led to an explosion of new language models capable of engaging in plausible language use. But ensuring these language models produce honest, helpful, and inoffensive outputs has proved difficult. In this paper, I argue problems of inappropriate content in current, autoregressive language models—such as ChatGPT and Gemini—are inescapable; merely predicting the next word is incompatible with reliably providing appropriate outputs. The various fine-tuning methods, while helpful, cannot transform the (...)
  9. Transparency for AI systems: a value-based approach.Stefan Buijsman - 2024 - Ethics and Information Technology 26 (2):1-11.
    With the widespread use of artificial intelligence, it becomes crucial to provide information about these systems and how they are used. Governments aim to disclose their use of algorithms to establish legitimacy and the EU AI Act mandates forms of transparency for all high-risk and limited-risk systems. Yet, what should the standards for transparency be? What information is needed to show to a wide public that a certain system can be used legitimately and responsibly? I argue that process-based approaches fail (...)
  10. All too real metacapitalism: towards a non-dualist political ontology of metaverse.Mark Coeckelbergh - 2024 - Ethics and Information Technology 26 (2):1-9.
    Current techno-utopian visions of metaverse raise ontological, ethical, and political questions. Drawing on existing literature on virtual worlds but also philosophically moving beyond that body of work and responding to political contexts concerning identity, capitalism, and climate, this paper begins to address these questions by offering a conceptual framework to think about the ontology of metaverse(s) in ways that see metaverse as real, experienced and shaping our experience, technologically constituted, and political. It shows how this non-dualist political-ontological approach helps to (...)
  11. Socializing the political: rethinking filter bubbles and social media with Hannah Arendt.Zachary Daus - 2024 - Ethics and Information Technology 26 (2):1-10.
    It is often claimed that social media accelerate political extremism by employing personalization algorithms that filter users into groups with homogenous beliefs. While an intuitive position, recent research has shown that social media users exhibit self-filtering tendencies. In this paper, I apply Hannah Arendt’s theory of political judgment to hypothesize a cause for self-filtering on social media. According to Arendt, a crucial step in political judgment is the imagination of a general standpoint of distinct yet equal perspectives, against which individuals (...)
  12. Detecting your depression with your smartphone? – An ethical analysis of epistemic injustice in passive self-tracking apps.Mirjam Faissner, Eva Kuhn, Regina Müller & Sebastian Laacke - 2024 - Ethics and Information Technology 26 (2):1-14.
    Smartphone apps might offer a low-threshold approach to the detection of mental health conditions, such as depression. Based on the gathering of ‘passive data,’ some apps generate a user’s ‘digital phenotype,’ compare it to those of users with clinically confirmed depression and issue a warning if a depressive episode is likely. These apps can, thus, serve as epistemic tools for affected users. From an ethical perspective, it is crucial to consider epistemic injustice to promote socially responsible innovations within digital mental (...)
  13. Use case cards: a use case reporting framework inspired by the European AI Act.Emilia Gómez, Sandra Baldassarri, David Fernández-Llorca & Isabelle Hupont - 2024 - Ethics and Information Technology 26 (2):1-23.
    Despite recent efforts by the Artificial Intelligence (AI) community to move towards standardised procedures for documenting models, methods, systems or datasets, there is currently no methodology focused on use cases aligned with the risk-based approach of the European AI Act (AI Act). In this paper, we propose a new framework for the documentation of use cases that we call use case cards, based on the use case modelling included in the Unified Modeling Language (UML) standard. Unlike other documentation methodologies, we (...)
  14. ChatGPT is bullshit.Michael Townsen Hicks, James Humphries & Joe Slater - 2024 - Ethics and Information Technology 26 (2):1-10.
    Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, are better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We (...)
  15. Fiduciary requirements for virtual assistants.Leonie Koessler - 2024 - Ethics and Information Technology 26 (2):1-18.
    Virtual assistants (VAs), like Amazon’s Alexa, Google’s Assistant, and Apple’s Siri, are on the rise. However, despite allegedly being ‘assistants’ to users, they ultimately help firms to maximise profits. With more and more tasks and leeway bestowed upon VAs, the severity as well as the extent of conflicts of interest between firms and users increase. This article builds on the common law field of fiduciary law to argue why and how regulators should address this phenomenon. First, the functions of VAs (...)
  16. Can we solve the Gamer’s Dilemma by resisting it?Morgan Luck - 2024 - Ethics and Information Technology 26 (2):1-8.
    The Gamer’s Dilemma (Luck, 2009a) is a paradox concerning the moral permissibility of two types of acts performed within computer games. Some attempt to resolve the dilemma by finding a relevant difference between these two acts (Bartel, 2012; Patridge, 2013; Young, 2016; Nader, 2020; Kjeldgaard-Christiansen, 2020; and Milne & Ivankovic, 2021), or to dissolve the dilemma by arguing that the permissibility of these acts is not as they seem (Ali, 2015; Ramirez, 2020). More recently some have attempted to resist the (...)
  17. Undisruptable or stable concepts: can we design concepts that can avoid conceptual disruption, normative critique, and counterexamples?Björn Lundgren - 2024 - Ethics and Information Technology 26 (2):1-11.
    It has been argued that our concepts can be disrupted or challenged by technology or normative concerns, which raises the question of whether we can create, design, engineer, or define more robust concepts that avoid counterexamples and conceptual challenges that can lead to conceptual disruption. In this paper, it is argued that we can. This argument is presented through a case study of a definition in the technological domain.
  18. Engineering the trust machine. Aligning the concept of trust in the context of blockchain applications.Eva Pöll - 2024 - Ethics and Information Technology 26 (2):1-16.
    Complex technology has become an essential aspect of everyday life. We rely on technology as part of basic infrastructure and repeatedly for tasks throughout the day. Yet, in many cases the relation surpasses mere reliance and evolves to trust in technology. A new, disruptive technology is blockchain. It claims to introduce trustless relationships among its users, aiming to eliminate the need for trust altogether—even being described as “the trust machine”. This paper presents a proposal to adjust the concept of trust (...)
  19. Ludic resistance: a new solution to the gamer’s paradox.Louis Rouillé - 2024 - Ethics and Information Technology 26 (2):1-11.
  20. The impacts of AI futurism: an unfiltered look at AI's true effects on the climate crisis.Paul Schütze - 2024 - Ethics and Information Technology 26 (2):1-14.
    This paper provides an in-depth analysis of the impact of AI technologies on the climate crisis beyond their mere resource consumption. To critically examine this impact, I introduce the concept of AI futurism. With this term I capture the ideology behind AI, and argue that this ideology is inherently connected to the climate crisis. This is because AI futurism construes a socio-material environment overly fixated on AI and technological progress, to the extent that it loses sight of the existential threats (...)
  21. Conceptualizing understanding in explainable artificial intelligence (XAI): an abilities-based approach.Timo Speith, Barnaby Crook, Sara Mann, Astrid Schomäcker & Markus Langer - 2024 - Ethics and Information Technology 26 (2):1-15.
    A central goal of research in explainable artificial intelligence (XAI) is to facilitate human understanding. However, understanding is an elusive concept that is difficult to target. In this paper, we argue that a useful way to conceptualize understanding within the realm of XAI is via certain human abilities. We present four criteria for a useful conceptualization of understanding in XAI and show that these are fulfilled by an abilities-based approach: First, thinking about understanding in terms of specific abilities is motivated (...)
  22. Deconstructing controversies to design a trustworthy AI future.Francesca Trevisan, Pinelopi Troullinou, Dimitris Kyriazanos, Evan Fisher, Paola Fratantoni, Claire Morot Sir & Virginia Bertelli - 2024 - Ethics and Information Technology 26 (2):1-15.
    Technology policy needs to be receptive to different social needs and realities to ensure that innovations are both ethically developed and accessible. This article proposes a new method to integrate social controversies into foresight scenarios as a means to enhance the trustworthiness and inclusivity of policymaking around Artificial Intelligence. Foresight exercises are used to anticipate future tech challenges and to inform policy development. However, the integration of social controversies within these exercises remains an unexplored area. This article aims to bridge (...)
  23. Percentages and reasons: AI explainability and ultimate human responsibility within the medical field.Eva Winkler, Andreas Wabro & Markus Herrmann - 2024 - Ethics and Information Technology 26 (2):1-10.
    With regard to current debates on the ethical implementation of AI, especially two demands are linked: the call for explainability and for ultimate human responsibility. In the medical field, both are condensed into the role of one person: It is the physician to whom AI output should be explainable and who should thus bear ultimate responsibility for diagnostic or treatment decisions that are based on such AI output. In this article, we argue that a black box AI indeed creates a (...)
  24. Explainable AI in the military domain.Nathan Gabriel Wood - 2024 - Ethics and Information Technology 26 (2):1-13.
    Artificial intelligence (AI) has become nearly ubiquitous in modern society, from components of mobile applications to medical support systems, and everything in between. In societally impactful systems imbued with AI, there has been increasing concern related to opaque AI, that is, artificial intelligence where it is unclear how or why certain decisions are reached. This has led to a recent boom in research on “explainable AI” (XAI), or approaches to making AI more explainable and understandable to human users. In the (...)
  25. The gamer’s dilemma: an expressivist response.Garry Young - 2024 - Ethics and Information Technology 26 (2):1-12.
    In this paper, I support a hybrid form of expressivism called constructive ecumenical expressivism (CEE) which I have previously used (to attempt) to resolve the gamer’s dilemma. (Young, 2016. Resolving the gamer’s dilemma. London: Palgrave Macmillan.) In support of CEE, I argue that the various other attempts at either resolving, dissolving or resisting the dilemma are consistent with CEE’s moral framework. That is, with its way of explaining what a claim to morality is, with how moral norms are established, with (...)
  26. Cybernetic governance: implications of technology convergence on governance convergence.Andrej Zwitter - 2024 - Ethics and Information Technology 26 (2):1-13.
    Governance theory in political science and international relations has to adapt to the onset of an increasingly digital society. However, until now, technological advancements and the increasing convergence of technologies outpace regulatory efforts and frustrate any efforts to apply ethical and legal frameworks to these domains. This is due to the convergence of multiple, sometimes incompatible governance frameworks that accompany the integration of technologies on different platforms. This theoretical claim will be illustrated by examples such as the integration of technologies (...)
  27. Embracing grief in the age of deathbots: a temporary tool, not a permanent solution.Aorigele Bao & Yi Zeng - 2024 - Ethics and Information Technology 26 (1):1-10.
    “Deathbots” are digital constructs that emulate the conversational patterns, demeanor, and knowledge of deceased individuals. Earlier moral discussions about deathbots centered on the dignity and autonomy of the deceased. This paper primarily examines the potential psychological and emotional dependencies that users might develop towards deathbots, considering approaches to prevent problematic dependence through temporary use. We adopt a hermeneutic method to argue that deathbots, as they currently exist, are unlikely to provide substantial comfort. Lacking the capacity to bear emotional burdens, they fall (...)
  28. AI for crisis decisions.Tina Comes - 2024 - Ethics and Information Technology 26 (1):1-14.
    Increasingly, our cities are confronted with crises. Fuelled by climate change and a loss of biodiversity, increasing inequalities and fragmentation, challenges range from social unrest and outbursts of violence to heatwaves, torrential rainfall, or epidemics. As crises require rapid interventions that overwhelm human decision-making capacity, AI has been portrayed as a potential avenue to support or even automate decision-making. In this paper, I analyse the specific challenges of AI in urban crisis management as an example and test case for many (...)
  29. Engineers on responsibility: feminist approaches to who’s responsible for ethical AI.Eleanor Drage, Kerry McInerney & Jude Browne - 2024 - Ethics and Information Technology 26 (1):1-13.
    Responsibility has become a central concept in AI ethics; however, little research has been conducted into practitioners’ personal understandings of responsibility in the context of AI, including how responsibility should be defined and who is responsible when something goes wrong. In this article, we present findings from a 2020–2021 data set of interviews with AI practitioners and tech workers at a single multinational technology company and interpret them through the lens of feminist political thought. We reimagine responsibility in the context (...)
  30. Diversity and language technology: how language modeling bias causes epistemic injustice.Fausto Giunchiglia, Gertraud Koch, Gábor Bella & Paula Helm - 2024 - Ethics and Information Technology 26 (1):1-15.
    It is well known that AI-based language technology—large language models, machine translation systems, multilingual dictionaries, and corpora—is currently limited to three percent of the world’s most widely spoken, financially and politically backed languages. In response, recent efforts have sought to address the “digital language divide” by extending the reach of large language models to “underserved languages.” We show how some of these efforts tend to produce flawed solutions that adhere to a hard-wired representational preference for certain languages, which we call (...)
  31. Is moral status done with words?Miriam Gorr - 2024 - Ethics and Information Technology 26 (1):1-11.
    This paper critically examines Coeckelbergh’s (2023) performative view of moral status. Drawing parallels to Searle’s social ontology, two key claims of the performative view are identified: (1) Making a moral status claim is equivalent to making a moral status declaration. (2) A successful declaration establishes the institutional fact that the entity has moral status. Closer examination, however, reveals flaws in both claims. The second claim faces a dilemma: individual instances of moral status declaration are likely to fail because they do (...)
  32. Moral sensitivity and the limits of artificial moral agents.Joris Graff - 2024 - Ethics and Information Technology 26 (1):1-12.
    Machine ethics is the field that strives to develop ‘artificial moral agents’ (AMAs), artificial systems that can autonomously make moral decisions. Some authors have questioned the feasibility of machine ethics, by questioning whether artificial systems can possess moral competence, or the capacity to reach morally right decisions in various situations. This paper explores this question by drawing on the work of several moral philosophers (McDowell, Wiggins, Hampshire, and Nussbaum) who have characterised moral competence in a manner inspired by Aristotle. Although (...)
  33. Intentional astrobiological signaling and questions of causal impotence.Chelsea Haramia - 2024 - Ethics and Information Technology 26 (1):1-9.
    My focus is on the contemporary astrobiological activity of Messaging ExtraTerrestrial Intelligence (METI). This intentional astrobiological signaling typically involves embedding digital communications in powerful radio signals and transmitting those signals out into the cosmos in an explicit effort to make contact with extraterrestrial others. Some who criticize METI express concern that contact with technologically advanced extraterrestrial life could be seriously harmful to Earth or humanity. One popular response to this critique of messaging is an appeal to causal impotence sometimes referred (...)
  34. Why converging technologies need converging international regulation.Dirk Helbing & Marcello Ienca - 2024 - Ethics and Information Technology 26 (1):1-11.
    Emerging technologies such as artificial intelligence, gene editing, nanotechnology, neurotechnology and robotics, which were originally unrelated or separated, are becoming more closely integrated. Consequently, the boundaries between the physical-biological and the cyber-digital worlds are no longer well defined. We argue that this technological convergence has fundamental implications for individuals and societies. Conventional domain-specific governance mechanisms have become ineffective. In this paper we provide an overview of the ethical, societal and policy challenges of technological convergence. Particularly, we scrutinize the adequacy of (...)
  35. Socially disruptive technologies and epistemic injustice.J. K. G. Hopster - 2024 - Ethics and Information Technology 26 (1):1-8.
    Recent scholarship on technology-induced ‘conceptual disruption’ has spotlighted the notion of a conceptual gap. Conceptual gaps have also been discussed in scholarship on epistemic injustice, yet up until now these bodies of work have remained disconnected. This article shows that ‘gaps’ of interest to both bodies of literature are closely related, and argues that a joint examination of conceptual disruption and epistemic injustice is fruitful for both fields. I argue that hermeneutical marginalization—a skewed division of hermeneutical resources, which serves to (...)
  36. Ethics of generative AI and manipulation: a design-oriented research agenda.Michael Klenk - 2024 - Ethics and Information Technology 26 (1):1-15.
    Generative AI enables automated, effective manipulation at scale. Despite the growing general ethical discussion around generative AI, the specific manipulation risks remain inadequately investigated. This article outlines essential inquiries encompassing conceptual, empirical, and design dimensions of manipulation, pivotal for comprehending and curbing manipulation risks. By highlighting these questions, the article underscores the necessity of an appropriate conceptualisation of manipulation to ensure the responsible development of Generative AI technologies.
  37. The conceptual exportation question: conceptual engineering and the normativity of virtual worlds.Thomas Montefiore & Paul-Mikhail Catapang Podosky - 2024 - Ethics and Information Technology 26 (1):1-13.
    Debate over the normativity of virtual phenomena is now widespread in the philosophical literature, taking place in roughly two distinct but related camps. The first considers the relevant problems to be within the scope of applied ethics, where the general methodological program is to square the intuitive (im)permissibility of virtual wrongdoings with moral accounts that justify their (im)permissibility. The second camp approaches the normativity of virtual wrongdoings as a metaphysical debate. This is done by disambiguating the ‘virtual’ character of ‘virtual (...)
  38. AI and the need for justification (to the patient).Anantharaman Muralidharan, Julian Savulescu & G. Owen Schaefer - 2024 - Ethics and Information Technology 26 (1):1-12.
    This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient’s values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided (...)
  39. Design culture for Sustainable urban artificial intelligence: Bruno Latour and the search for a different AI urbanism.Otello Palmini & Federico Cugurullo - 2024 - Ethics and Information Technology 26 (1):1-12.
    The aim of this paper is to investigate the relationship between AI urbanism and sustainability by drawing upon some key concepts of Bruno Latour’s philosophy. The idea of a sustainable AI urbanism - often understood as the juxtaposition of smart and eco urbanism - is here critiqued through a reconstruction of the conceptual sources of these two urban paradigms. Some key ideas of smart and eco urbanism are indicated as incompatible and therefore the fusion of these two paradigms is assessed (...)
  40. Correction to: Weapons of moral construction? On the value of fairness in algorithmic decision-making.Simona Tiribelli & Benedetta Giovanola - 2024 - Ethics and Information Technology 26 (1):1-1.