  • Human Brain Organoids: Why There Can Be Moral Concerns If They Grow Up in the Lab and Are Transplanted or Destroyed. Andrea Lavazza & Massimo Reichlin - 2023 - Cambridge Quarterly of Healthcare Ethics 32 (4):582-596.
    Human brain organoids (HBOs) are three-dimensional biological entities grown in the laboratory in order to recapitulate the structure and functions of the adult human brain. They can be taken to be novel living entities for their specific features and uses. As a contribution to the ongoing discussion on the use of HBOs, the authors identify three sets of reasons for moral concern. The first set of reasons regards the potential emergence of sentience/consciousness in HBOs that would endow them with a (...)
  • Understanding Artificial Agency. Leonard Dung - forthcoming - Philosophical Quarterly.
    Which artificial intelligence (AI) systems are agents? To answer this question, I propose a multidimensional account of agency. According to this account, a system's agency profile is jointly determined by its level of goal-directedness and autonomy as well as its abilities for directly impacting the surrounding world, long-term planning and acting for reasons. Rooted in extant theories of agency, this account enables fine-grained, nuanced comparative characterizations of artificial agency. I show that this account has multiple important virtues and is more (...)
  • Tests of Animal Consciousness are Tests of Machine Consciousness. Leonard Dung - forthcoming - Erkenntnis:1-20.
    If a machine attains consciousness, how could we find out? In this paper, I make three related claims regarding positive tests of machine consciousness. All three claims center on the idea that an AI can be constructed “ad hoc”, that is, with the purpose of satisfying a particular test of consciousness while clearly not being conscious. First, a proposed test of machine consciousness can be legitimate, even if AI can be constructed ad hoc specifically to pass this test. This is (...)
  • Preserving the Normative Significance of Sentience. Leonard Dung - 2024 - Journal of Consciousness Studies 31 (1):8-30.
    According to an orthodox view, the capacity for conscious experience (sentience) is relevant to the distribution of moral status and value. However, physicalism about consciousness might threaten the normative relevance of sentience. According to the indeterminacy argument, sentience is metaphysically indeterminate while indeterminacy of sentience is incompatible with its normative relevance. According to the introspective argument (by François Kammerer), the unreliability of our conscious introspection undercuts the justification for belief in the normative relevance of consciousness. I defend the normative relevance (...)
  • Profiles of animal consciousness: A species-sensitive, two-tier account to quality and distribution. Leonard Dung & Albert Newen - 2023 - Cognition 235 (C):105409.
    The science of animal consciousness investigates (i) which animal species are conscious (the distribution question) and (ii) how conscious experience differs in detail between species (the quality question). We propose a framework which clearly distinguishes both questions and tackles both of them. This two-tier account distinguishes consciousness along ten dimensions and suggests cognitive capacities which serve as distinct operationalizations for each dimension. The two-tier account achieves three valuable aims: First, it separates strong and weak indicators of the presence of consciousness. (...)
  • How to deal with risks of AI suffering. Leonard Dung - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    Suffering is bad. This is why, ceteris paribus, there are strong moral reasons to prevent suffering. Moreover, typically, those moral reasons are stronger when the amount of suffering at st...