Roman Yampolskiy
University of Louisville
  1. Long-Term Trajectories of Human Civilization. Seth D. Baum, Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin & Roman V. Yampolskiy - 2019 - Foresight 21 (1):53-83.
    Purpose: This paper aims to formalize long-term trajectories of human civilization as a scientific and ethical field of study. The long-term trajectory of human civilization can be defined as the path that human civilization takes during the entire future time period in which human civilization could continue to exist. Design/methodology/approach: This paper focuses on four types of trajectories: status quo trajectories, in which human civilization persists in a state broadly similar to its current state into the distant future; catastrophe (...)
  2. Do No Harm Policy for Minds in Other Substrates. Soenke Ziesche & Roman V. Yampolskiy - 2019 - Journal of Ethics and Emerging Technologies 29 (2):1-11.
    Various authors have argued that in the future not only will it be technically feasible for human minds to be transferred to other substrates, but this will become, for most humans, the preferred option over the current biological limitations. It has even been claimed that such a scenario is inevitable in order to solve the challenging, but imperative, multi-agent value alignment problem. In all these considerations, it has been overlooked that, in order to create a suitable environment for a particular (...)
  3. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles. Steven Umbrello & Roman Yampolskiy - forthcoming - International Journal of Social Robotics:1-15.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model for (...)
  4. Safety Engineering for Artificial General Intelligence. Roman Yampolskiy & Joshua Fox - 2013 - Topoi 32 (2):217-226.
    Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence and robotics communities. We will argue that attempts to attribute moral agency and assign rights to all intelligent machines are misguided, whether applied to infrahuman or superhuman AIs, as are proposals to limit the negative effects of AIs by constraining their behavior. As an alternative, we propose a new science of safety engineering for intelligent artificial agents based on maximizing for what humans value. In particular, we challenge (...)
  5. An AGI Modifying Its Utility Function in Violation of the Strong Orthogonality Thesis. James D. Miller, Roman Yampolskiy & Olle Häggström - 2020 - Philosophies 5 (4):40.
    An artificial general intelligence might have an instrumental drive to modify its utility function to improve its ability to cooperate, bargain, promise, threaten, and resist and engage in blackmail. Such an AGI would necessarily have a utility function that was at least partially observable and that was influenced by how other agents chose to interact with it. This instrumental drive would conflict with the strong orthogonality thesis since the modifications would be influenced by the AGI’s intelligence. AGIs in highly competitive (...)
  6. Leakproofing the Singularity. Roman V. Yampolskiy - 2012 - Journal of Consciousness Studies 19 (1-2):194-214.
    This paper attempts to formalize and to address the ‘leakproofing’ of the Singularity problem presented by David Chalmers. The paper begins with the definition of the Artificial Intelligence Confinement Problem. After analysis of existing solutions and their shortcomings, a protocol is proposed aimed at making a more secure confinement environment which might delay potential negative effect from the technological singularity while allowing humanity to benefit from the superintelligence.
  7. The Technological Singularity. Jim Miller, Roman Yampolskiy, Stuart Armstrong & Vic Callaghan (eds.) - 2015.
    "The idea that human history is approaching a singularity - that ordinary humans will someday be overtaken by artificially intelligent machines or cognitively enhanced biological intelligence, or both - has moved from the realm of science fiction to serious debate. Some singularity theorists predict that if the field of artificial intelligence continues to develop at its current dizzying rate, the singularity could come about in the middle of the present century. Murray Shanahan offers an introduction to the idea of the (...)
  8. Understanding and Avoiding AI Failures: A Practical Guide. Robert Williams & Roman Yampolskiy - 2021 - Philosophies 6 (3):53.
    As AI technologies increase in capability and ubiquity, AI accidents are becoming more common. Based on normal accident theory, high reliability theory, and open systems theory, we create a framework for understanding the risks associated with AI applications. This framework is designed to direct attention to pertinent system properties without requiring unwieldy amounts of accuracy. In addition, we also use AI safety principles to quantify the unique risks of increased intelligence and human-like qualities in AI. Together, these two fields give (...)