  1. Risk of What? Defining Harm in the Context of AI Safety. Laura Fearnley, Elly Cairns, Tom Stoneham, Philippa Ryan, Jenn Chubb, Jo Iacovides, Cynthia Iglesias Urrutia, Phillip Morgan, John McDermid & Ibrahim Habli - manuscript
    For decades, the field of system safety has designed safe systems by reducing the risk of physical harm to humans, property and the environment to an acceptable level. Recently, this definition of safety has come under scrutiny by governments and researchers who argue that the narrow focus on reducing physical harm, whilst necessary, is not sufficient to secure the safety of AI systems. There is growing pressure to expand the scope of safety in the context of AI to address emerging (...)
  2. From Pluralistic Normative Principles to Autonomous-Agent Rules. Beverley Townsend, Colin Paterson, T. T. Arvind, Gabriel Nemirovsky, Radu Calinescu, Ana Cavalcanti, Ibrahim Habli & Alan Thomas - 2022 - Minds and Machines 1 (4):1-33.
    With recent advancements in systems engineering and artificial intelligence, autonomous agents are increasingly being called upon to execute tasks that have normative relevance. These are tasks that directly—and potentially adversely—affect human well-being and demand of the agent a degree of normative-sensitivity and -compliance. Such norms and normative principles are typically of a social, legal, ethical, empathetic, or cultural nature. Whereas norms of this type are often framed in the abstract, or as high-level principles, addressing normative concerns in concrete applications of (...)
  3. Mind the gaps: Assuring the safety of autonomous systems from an engineering, ethical, and legal perspective. Simon Burton, Ibrahim Habli, Tom Lawton, John McDermid, Phillip Morgan & Zoe Porter - 2020 - Artificial Intelligence 279 (C):103201.
  4. Distinguishing two features of accountability for AI technologies. Zoe Porter, Annette Zimmermann, Phillip Morgan, John McDermid, Tom Lawton & Ibrahim Habli - 2022 - Nature Machine Intelligence 4:734–736.
    Policymakers and researchers consistently call for greater human accountability for AI technologies, but we should be clear about two distinct features of accountability.