Abstract
An advanced artificial intelligence (a “superintelligence”) could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks, and there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the existence of God. And while this analogy is interesting in its own right, what is more interesting are its potential implications. It has been repeatedly argued that sceptical theism has devastating effects on our beliefs and practices. Could it be that AI-doomsaying has similar effects? I argue that it could. Specifically, and somewhat paradoxically, I argue that it could amount either to a reductio of the doomsayers’ position or to an important additional reason to join their cause. I use this paradox to suggest that the modal standards for argument in the superintelligence debate need to be addressed.
Notes
Here I appeal to two theses defended by Bostrom in his recent book Superintelligence (Bostrom 2014): the strategic advantage thesis and the orthogonality thesis. The latter thesis is particularly important for the doomsday scenario discussed in the text. It maintains that pretty much any level of intelligence is compatible with pretty much any final goal. The thesis has been defended elsewhere as well (Bostrom 2012; Armstrong 2013).
The three leading examples are the Future of Humanity Institute, based at Oxford University and headed by Nick Bostrom (see http://www.fhi.ox.ac.uk); the Centre for the Study of Existential Risk or CSER, based at Cambridge University (see http://cser.org); and the Machine Intelligence Research Institute or MIRI, not affiliated with any university but based in Berkeley, CA (see http://intelligence.org). Only MIRI dedicates itself entirely to the topic of AI risk; the other two institutes address other potential risks as well.
In addition to Bostrom’s work, which is discussed at length below, there have been Eden et al. (2012); Blackford and Broderick (2014); Chalmers (2010), which led to a subsequent symposium double-edition of the same journal, see Journal of Consciousness Studies Volume 19, Issues 1&2.
I defend this argument from recent attacks on the “logical necessity” condition in Danaher (2014).
See fn 3 above for sources.
The argument for this is found in chapter 5 of Bostrom’s book.
This orthogonality thesis could be criticised. Some would argue that intelligence and benevolence go hand in hand, i.e. the more intelligent someone is, the more likely they are to behave in a morally appropriate manner. I have some sympathy for this view. I believe that if there are genuine, objectively verifiable moral truths, then the more intelligent an agent is, the more likely it is to discover and act upon those truths. Indeed, this view is popular among some theists. For instance, Richard Swinburne has argued that omniscience may imply omnibenevolence. I am indebted to an anonymous reviewer for urging me to clarify this point.
They may not be if the designers themselves have malevolent goals, but that is a distinct issue, having to do with our understanding of human agency, not superintelligent machine agency.
The leading critics in the academic literature are probably Ben Goertzel and Richard Loosemore; online, Alexander Kruel maintains a regularly updated blog critiquing the doomsday scenario. See http://www.kruel.co.
Bostrom (2014), p. 117 “One might think that the reasoning described above is so obvious that no credible project to develop artificial general intelligence could possibly overlook it. But one should not be too confident that this is so.” He then proceeds to give an example which suggests we may be overconfident in our inferences from past experiences.
Bostrom (2014), p. 117 “an unfriendly AI may become smart enough to realize that it is better off concealing some of its capability gains.” This could even involve adjusting its source code to deceive the testers.
Bostrom (2014), p. 119 “For example, an AI might not play nice in order that it be allowed to survive and prosper. Instead, the AI might calculate that if it is terminated, the programmers who built it will develop a new and somewhat different AI architecture, but one that will be given a similar utility function.”
Bostrom (2014), p. 113 and later at chapter 12 and the discussion of the value-loading problem.
To be clear, this does not mean that an infinitesimal probability of an existential risk should be taken seriously. But a risk of, say, 0.05 or 0.1 may be sufficient, given what is at stake.
I refer to this as the “consequential critique” of sceptical theism in [reference omitted].
Schellenberg (2007) refers to beliefs of this sort as being forms of “ultimism”.
The one exception here might be beliefs about logical or mathematical truths, though there are theists who claim that those truths are dependent on God as well.
Note how the focus here is limited to how the treacherous turn affects inductive inferences we make about artificial intelligences only. It does not affect all inductive inferences. This is unlike the situation with respect to sceptical theism.
I am indebted to an anonymous reviewer for encouraging me to make this point.
This is the view of the Machine Intelligence Research Institute and some of its affiliated scholars, e.g. see Muehlhauser and Salamon (2012).
References
Almeida, M., & Oppy, G. (2003). Sceptical theism and evidential arguments from evil. Australasian Journal of Philosophy, 81, 496–516.
Anderson, D. (2012). Skeptical theism and value judgments. International Journal for the Philosophy of Religion, 72, 27–39.
Armstrong, S. (2013). General purpose intelligence: Arguing the orthogonality thesis. Analysis and Metaphysics, 12, 68–84.
Barrat, J. (2013). Our final invention: Artificial intelligence and the end of the human era. New York: St. Martin’s Press.
Bergmann, M. (2001). Skeptical theism and Rowe’s new evidential argument from evil. Nous, 35, 228.
Bergmann, M. (2009). Skeptical theism and the problem of evil. In T. P. Flint & M. Rea (Eds.), The Oxford handbook of philosophical theology. Oxford: OUP.
Bergmann, M., & Rea, M. (2005). In defence of skeptical theism: A reply to Almeida and Oppy. Australasian Journal of Philosophy, 83, 241–251.
Bostrom, N. (2012). The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Minds and Machines, 22(2), 71–85.
Bostrom, N. (2013). Existential risk prevention as a global priority. Global Policy, 4, 15–31.
Bostrom, N. (2014). Superintelligence: Paths, dangers, strategies. Oxford: OUP.
Bringsjord, S., Bringsjord, A., & Bello, A. (2012). Belief in the singularity is fideistic. In A. Eden, J. Moor, J. Soraker, & E. Steinhardt (Eds.), Singularity hypotheses: A scientific and philosophical assessment. Dordrecht: Springer.
Danaher, J. (2014). Skeptical theism and divine permission: A reply to Anderson. International Journal for Philosophy of Religion, 75(2), 101–118.
Doctorow, C., & Stross, C. (2012). The rapture of the nerds. New York: Tor Books.
Dougherty, T. (2012). Recent work on the problem of evil. Analysis, 71, 560–573.
Dougherty, T., & McBrayer, J. P. (Eds.). (2014). Skeptical theism: New essays. Oxford: OUP.
Eden, A., Moor, J., Soraker, J., & Steinhardt, E. (Eds.). (2012). Singularity hypotheses: A scientific and philosophical assessment. Dordrecht: Springer.
Hasker, W. (2010). All too skeptical theism. International Journal for Philosophy of Religion, 68, 15–29.
Loosemore, R. (2012). The fallacy of dumb superintelligence. IEET. Retrieved October 31, 2014, from http://ieet.org/index.php/IEET/more/loosemore20121128.
Loosemore, R. (2014). The Maverick Nanny with a Dopamine Drip: Debunking fallacies in the theory of AI motivation. IEET. Retrieved October 31, 2014, from http://ieet.org/index.php/IEET/more/loosemore20140724.
Lovering, R. (2009). On what God would do. International Journal for the Philosophy of Religion, 66(2), 87–104.
Maitzen, S. (2013). The moral skepticism objection to skeptical theism. In J. McBrayer & D. Howard-Snyder (Eds.), A companion to the problem of evil. Oxford: Wiley.
McBrayer, J. (2010). Skeptical theism. Philosophy Compass, 5, 611–623.
Muehlhauser, L., & Salamon, A. (2012). Intelligence explosion: Evidence and import. In A. Eden, J. Moor, J. Soraker, & E. Steinhardt (Eds.), Singularity hypotheses: A scientific and philosophical assessment. Dordrecht: Springer.
Piper, M. (2008). Why theists cannot accept skeptical theism. Sophia, 47(2), 129–148.
Rowe, W. (1979). The problem of evil and some varieties of atheism. American Philosophical Quarterly, 16(4), 335–341.
Schellenberg, J. L. (2007). The wisdom to doubt. Ithaca, NY: Cornell University Press.
Sehon, S. (2010). The problem of evil: Skeptical theism leads to moral paralysis. International Journal for the Philosophy of Religion, 67, 67–80.
Street, S. (forthcoming). If there’s a reason for everything then we don’t know what reasons are: Why the price of theism is normative skepticism. In M. Bergmann & P. Kain (Eds.), Challenges to moral and religious belief: Disagreement and evolution. Oxford: OUP.
Trakakis, N. (2007). The god beyond belief: In defence of William Rowe’s argument from evil. Dordrecht: Springer.
Wielenberg, E. (2010). Sceptical theism and divine lies. Religious Studies, 46, 509–523.
Wielenberg, E. (2014). Divine deception. In T. Dougherty & J. P. McBrayer (Eds.), Skeptical theism: New essays. Oxford: OUP.
Wykstra, S. (1996). Rowe’s noseeum arguments from evil. In D. Howard-Snyder (Ed.), The evidential argument from evil. Bloomington, IN: Indiana University Press.
Yampolskiy, R. (2012). Leakproofing the singularity. Journal of Consciousness Studies, 19, 194–214.
Yudkowsky, E. (2008). Artificial intelligence as a positive and negative factor in global risk. In N. Bostrom & M. Cirkovic (Eds.), Global catastrophic risks. Oxford: OUP.
Acknowledgments
I would like to thank Stephen Maitzen, Felipe Leon and Alexander Kruel for conversations and feedback on some of the ideas in this paper. I would also like to thank an anonymous reviewer for helpful criticism on a previous draft.
Danaher, J. Why AI Doomsayers are Like Sceptical Theists and Why it Matters. Minds & Machines 25, 231–246 (2015). https://doi.org/10.1007/s11023-015-9365-y