Catching Treacherous Turn: A Model of the Multilevel AI Boxing
Abstract
With the fast pace of AI development, the problem of preventing its global catastrophic risks has become urgent, yet no satisfactory solution has been found. Among the candidate solutions, the confinement of an AI in a box is generally considered a weak safety measure. However, effective confinement can stop some treacherous AIs if it is used as an additional layer of protection. Here, we propose an idealized model of the best possible confinement by aggregating all known ideas in the field of AI boxing. We model the confinement on the principles used in the safety engineering of nuclear power plants. We show that AI confinement should be implemented as several levels of defense: 1) designing the AI in a fail-safe manner; 2) limiting its capabilities, preventing self-improvement, and installing circuit breakers triggered by a treacherous turn; 3) isolating it from the outside world; and, as a last resort, 4) outside measures aimed at stopping a runaway AI in the wild. We demonstrate that a substantial number of mutually independent measures (more than 50 ideas are listed in the article) could provide a relatively high probability of containing a human-level AI, but may not be sufficient to prevent the runaway of a superintelligent AI. Thus, these measures will work only if they are used to prevent the creation of a superintelligent AI, not to contain one. We suggest that there could be a safe operation threshold at which an AI is useful but unable to hack the containment system from the inside, in the same way that a safe level of chain reaction is maintained inside nuclear power plants. Overall, however, a failure of the confinement is eventually inevitable, so a full AGI should be used only a limited number of times ("AI-ticks").
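The abstract's core quantitative claim, that many mutually independent measures yield a high containment probability even though confinement eventually fails, can be illustrated with a short calculation. The sketch below is a minimal illustration, not code from the paper: it assumes the four defense levels fail independently, and the per-level failure probabilities and function names are hypothetical.

```python
# Minimal sketch of the multilevel-defense arithmetic from the abstract.
# Assumption: defense levels fail independently, so the AI escapes on a
# given use only if every level fails at once. All probabilities below
# are hypothetical placeholders, not values from the paper.

def breach_probability(level_failure_probs):
    """P(escape on one use) = product of independent per-level failure probs."""
    p = 1.0
    for q in level_failure_probs:
        p *= q
    return p

def expected_ticks_until_breach(per_use_breach_prob):
    """Expected number of AGI uses ('AI-ticks') until the first breach,
    treating each use as an independent Bernoulli trial (geometric mean 1/p)."""
    return 1.0 / per_use_breach_prob

# Four levels from the paper: fail-safe design, capability limits,
# isolation, and outside measures.
levels = [0.1, 0.2, 0.1, 0.5]  # hypothetical per-level failure probabilities
p = breach_probability(levels)
print(f"Per-use breach probability: {p:.4f}")                                   # 0.0010
print(f"Expected AI-ticks until breach: {expected_ticks_until_breach(p):.0f}")  # 1000
```

This toy model also suggests why such measures may fail against superintelligence: a single correlated failure mode, such as the AI persuading its operators, breaks the independence assumption and collapses the product of failure probabilities.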
Similar books and articles
Global Solutions vs. Local Solutions for the AI Safety Problem. Alexey Turchin - 2019 - Big Data and Cognitive Computing 3 (1).
Classification of Global Catastrophic Risks Connected with Artificial Intelligence. Alexey Turchin & David Denkenberger - 2020 - AI and Society 35 (1):147-163.
A quantitative safety assessment model for transgenic protein products produced in agricultural crops. John A. Howard & Kirby C. Donnelly - 2004 - Journal of Agricultural and Environmental Ethics 17 (6):545-558.
Leakproofing the Singularity: Artificial Intelligence Confinement Problem. Roman V. Yampolskiy - 2012 - Journal of Consciousness Studies 19 (1-2):194-214.
Animal confinement and use. Robert Streiffer & David Killoren - 2019 - Canadian Journal of Philosophy 49 (1):1-21.
Multilevel Strategy for Immortality: Plan A – Fighting Aging, Plan B – Cryonics, Plan C – Digital Immortality, Plan D – Big World Immortality. Alexey Turchin - manuscript
Cognitive confinement: theoretical considerations on the construction of a cognitive niche, and on how it can go wrong. Konrad Werner - 2019 - Synthese 198 (7):6297-6328.
Liberty, beneficence, and involuntary confinement. Joan C. Callahan - 1984 - Journal of Medicine and Philosophy 9 (3):261-294.
The Use of Online Training Tools in Competition Cyclists During COVID-19 Confinement in Spain. Antonio Moreno-Tenas, Eva León-Zarceño & Miguel Angel Serrano-Rosa - 2021 - Frontiers in Psychology 12.
The Chromodielectric Soliton Model: Quark Self-Energy and Hadron Bags. Stephan Hartmann, Larry Wilets & Ping Tang - 1997 - Physical Review C 55:2067-2077.
Superintelligence as a Cause or Cure for Risks of Astronomical Suffering. Kaj Sotala & Lukas Gloor - 2017 - Informatica: An International Journal of Computing and Informatics 41 (4):389-400.