1 The Trilemma

Futurism refers to the attempt to explore predictions and possibilities about the future in a systematic fashion.Footnote 1 Singularitarianism is a branch of futurism devoted to the following two claims (Good, 1966; Kurzweil, 2005; Ulam, 1958; Vinge, 1993):

  • S1: A technological singularity is likely to happen soon;

  • S2: Deliberate action ought to be taken to ensure that this technological singularity either benefits humanity or at least does not harm us.

Both S1 and S2 rely on the concept of a technological singularity. Before we proceed further, it should be noted that the concept of the technological singularity may be regarded as one of the most obscure and problematic concepts of artificial intelligence, perhaps on a par with the concepts of the mind or soul in the philosophy of mind: while a good many individuals have relied on this concept to defend various claims, no one has yet offered a precise and universally acceptable definition of this concept.Footnote 2 Where attempts at a definition are made, there is a tendency to define the technological singularity in terms of other obscure concepts that themselves stand in need of further clarification: the intelligence explosion (Good, 1966), artificial general intelligence (Goertzel & Pennachin, 2007), superhumanity (Vinge, 1993), some point in time at which artificial intelligence will surpass human intelligence (Kurzweil, 2005), and even whole brain emulation (Bostrom, 2014). My trilemma relies on the fundamental idea that this obscure concept of the technological singularity, however it might ultimately be defined, either has a literal sense, a metaphorical sense, or is nonsensical.

The central claim of my paper is that the following trilemma confronts the singularitarian in virtue of this reliance on the obscure concept of the technological singularity:

  • P1: The concept of a technological singularity either has a literal sense, a metaphorical sense, or is nonsensical.

  • P2 (Horn 1): If the concept has a literal sense, then it is a mere mathematical artefact that shows up in theory but (probably) never in nature.

  • P3 (Horn 2): If the concept has a metaphorical sense, then the singularitarian hypothesis is underdetermined by the data.

  • P4 (Horn 3): If the concept is nonsensical, then the claims S1–S2 of the singularitarian are meaningless.

  • C: \(\therefore \) Either the technological singularity is a mere mathematical artefact that shows up in theory but (probably) never in nature, the singularitarian hypothesis is underdetermined by the data, or the claims S1–S2 of the singularitarian are meaningless.

2 First Horn

A mathematical singularity refers to a point at which a function blows up or ceases to be well-behaved. Consider the reciprocal function \(f(x) = \frac{1}{x}\). As x approaches 0, f(x) approaches \(\pm \infty \). At \(x = 0\), f(x) is undefined (Fig. 1).
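The blow-up at \(x = 0\) can be illustrated with a few lines of code (a minimal sketch of the mathematics, nothing more):

```python
# Evaluate f(x) = 1/x at points approaching the singularity at x = 0.
# The function is undefined at x = 0 itself: f(0) raises ZeroDivisionError.
def f(x):
    return 1.0 / x

for x in [1.0, 0.1, 0.01, 0.001]:
    print(f"f({x}) = {f(x)}")   # output grows without bound as x -> 0+
```

The point of the sketch is simply that no finite value can be assigned at the singularity: the function ceases to be well-behaved there.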

A physical singularity, on the other hand, refers to a point at which the mathematics supporting the physics ceases to be well-behaved. An example of a physical singularity is the center of a black hole. A black hole is a region of spacetime whose gravitational pull is so strong that nothing (not even light) can escape. The escape velocity is the speed at which an object would have to travel to escape a body's gravitational pull; the event horizon is the boundary around a black hole within which this escape velocity exceeds the speed of light c. The Schwarzschild radius (\(r_s\)) is the radius of the event horizon of a black hole.

Einstein’s theory of general relativity is a theory of physics that offers a unified description of four-dimensional spacetime, in which spacetime curvature is related to the energy and momentum of matter or radiation. The theory of general relativity generalizes special relativity, which tells us that the speed of light c in a vacuum is the same for all observers and that nothing can travel faster than c. Since escaping from within the event horizon would require exceeding c, it follows that nothing can escape a black hole. Black holes are predicted by the theory of general relativity: since space and time can be warped by matter, black holes are simply highly dense agglomerations of matter associated with extreme spacetime curvature. However, the laws of physics break down at the center of a black hole. Within the event horizon, the radius r takes a value between 0 and \(r_s\), and the volume of a sphere of radius r may be computed as \(V = \frac{4}{3}\pi r^3\). As we approach the center of the black hole, r approaches 0, and so V approaches 0 too. Density \(\rho \) is computed in terms of the formula \(\rho = \frac{M}{V}\), where M denotes the mass and V denotes the volume. For a fixed mass M, as V approaches 0, \(\rho \) approaches \(\infty \). At the center of the black hole, \(r = 0\), \(V = 0\), and \(\rho \) is undefined. The curvature of spacetime also becomes infinite there.
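The divergence of \(\rho = \frac{M}{V}\) as r approaches 0 can be illustrated with a toy computation (not a physical model; the solar-mass value is used purely for illustration):

```python
# Toy illustration: density rho = M / V of a fixed mass M compressed into a
# sphere of shrinking radius r. As r -> 0, V = (4/3)*pi*r**3 -> 0 and rho
# diverges; at r = 0 the density is undefined.
import math

M = 1.989e30  # mass in kg (roughly one solar mass, for illustration only)

def density(r):
    V = (4.0 / 3.0) * math.pi * r**3
    return M / V

for r in [1e3, 1.0, 1e-3]:  # radii in metres
    print(f"r = {r} m  ->  rho = {density(r):.3e} kg/m^3")
# Each thousandfold decrease in r multiplies rho by a factor of 1e9; at
# r = 0 the computation fails, mirroring the breakdown at the singularity.
```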

Fig. 1: \(f(x) = \frac{1}{x}\), with a mathematical singularity at \(x = 0\) (hyperbolic growth). © Melvin Chen (LaTeX)

A third example of a singularity may be found in the velocity of water as it spirals over the center of a drain. At this mechanical singularity, the mathematics supporting the mechanics of fluids ceases to be well-behaved. We may perform a back-of-the-envelope calculation of the speed at which water is moving by using the formula \(s = \frac{c}{r}\), where s denotes the velocity at which the water is moving, c denotes the constant having to do with how fast the water was turning before the plug was pulled, and r denotes the distance of the water to the center of the spin.Footnote 3

There is an analogy between the revolution of the planets around the sun (source) and the spinning of the water around the drain (target). The closer the planets are to the sun, the greater the velocity. Conversely, the greater the distance between the planets and the sun, the lower the velocity. Analogously, the closer the water is to the center of the spin, the greater the velocity. Conversely, the greater the distance between the water and the center of the spin, the lower the velocity. In both the source and the target of this analogy, what is being observed is the law of conservation of angular momentum. Relative to the back-of-the-envelope formula \(s = \frac{c}{r}\), as we approach the center of the spin, r approaches 0. When the water is over the center of the drain, \(r = 0\) and s is undefined. In other words, water will be spiralling at an infinite velocity when it is right over the drain. At this mechanical singularity, the laws of physics break down, since nothing is supposed to be able to travel faster than c or 299,792,458 \(ms^{-1}\).
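The back-of-the-envelope formula can be evaluated directly (an illustrative sketch; the constant c below is an arbitrary stand-in for the conserved quantity):

```python
# Back-of-the-envelope sketch of s = c / r for water spiralling toward a
# drain. c is the constant having to do with how fast the water was turning
# before the plug was pulled; the value here is arbitrary and illustrative.
c = 0.05  # m^2/s, illustrative constant

def speed(r):
    return c / r

for r in [0.5, 0.05, 0.005]:  # distance to the centre of the spin, in metres
    print(f"r = {r} m  ->  s = {speed(r)} m/s")
# As r -> 0 the predicted speed grows without bound, eventually exceeding
# the speed of light: the point at which the idealized model, rather than
# nature, has broken down.
```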

The mathematical singularity (\(f(x) = \frac{1}{x}\)), physical singularity (infinite density and spacetime curvature at the center of a black hole), and mechanical singularity (infinite velocity of water when it is spiralling right over the drain) are all mathematical artefacts: they show up in theory but (probably) never in nature. More generally, whenever equations describing natural phenomena take the form of reciprocal functions, one should expect to encounter mathematical singularities at some point. In those instances, the functions cease to be well-behaved and become degenerate instead. If the singularitarians have in mind this strict and literal sense of the singularity in their concept of the technological singularity, then it follows that they are unlikely to encounter this technological singularity in nature. Where a singularity arises, it will do so as a mere quirk of mathematics supporting the theory.Footnote 4

When the mathematics supporting the physics ceases to be well-behaved, physicists may concede the empirical impossibility of ever reaching or observing the singularity points (e.g., the infinite density at the center of a black hole or the infinite velocity of water as it spirals right over the drain), the omission of certain physical laws or effects in their account, and the incompleteness of the best-going theories of physics. Likewise, if singularitarians embrace the first horn and construe the concept of the technological singularity in a strict and literal sense, then it follows that they are unlikely to observe the technological singularity in nature.

The literal sense of the technological singularity is associated with two allied notions: an intelligence explosion and a speed explosion (Chalmers, 2010).Footnote 5 The argument for an intelligence explosion may be found in the following passage by Good (1966, p. 33):

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind.

According to this argument, a machine \(m_i\) that is more intelligent than human beings will be better than humans at designing machines. Therefore, \(m_i\) will be capable of designing another machine \(m_j\) more intelligent than the most intelligent machine that humans can design. In particular, if \(m_i\) was itself designed by human beings, then it will be capable of designing a machine that is more intelligent than itself. The next machine \(m_j\) will also be capable of designing yet another machine \(m_k\) more intelligent than itself, giving rise to a sequence \(m_i\), \(m_j\), \(m_k\), etc. of ever more intelligent machines.

The argument for a speed explosion may be traced to Ray Solomonoff. Solomonoff (1985, pp. 149–150) identifies a series of actual and hypothetical milestones in artificial intelligence research: Milestone A (the golden age of the artificial intelligence research tradition, dating back to 1956 and the Dartmouth Summer Research Project on Artificial Intelligence); Milestone B (the development of a general theory of problem-solving); Milestone C (the design of a machine capable of working on the problem of self-improvement); Milestone D (the design of a machine capable of reading almost any text in the English language and incorporating most of the material into its database); Milestone E (the design of a machine with near-human-level general problem-solving capacity); Milestone F (the design of a machine with the capacity near that of the computer science community); and Milestone G (the design of a machine with the capacity many times that of the computer science community). According to Solomonoff, it is possible that Milestone B (the most critical bottleneck) may take up to 50 years to attain, although it is much more likely to be attained within 25 years. Thereafter, Milestones C and D might take as little as 5 or 10 years to reach, and Milestone E will only take a few years more. It should not take us more than 10 years to get to Milestone F and about 11 more years to then reach Milestone G.

A concise version of the argument for a speed explosion may be found in Yudkowsky (1996)Footnote 6:

Computing speed doubles every two subjective years of work. Two years after Artificial Intelligences reach human equivalence, their speed doubles. One year later, their speed doubles again. Six months – three months – 1.5 months …Singularity

While there could be an intelligence explosion without a speed explosion and a speed explosion without an intelligence explosion, the two notions work well together. Suppose that a superintelligent machine \(m_i\) can design, within 2 years, another machine \(m_j\) that is both twice as fast and 10% more intelligent, and suppose further that this principle may be extended indefinitely into the future. We will end up with faster processing, which will in turn lead to an ever faster design cycle involving our sequence \(m_i\), \(m_j\), \(m_k\), etc. of ever more intelligent machine-designing machines. Chalmers (2010) concludes that, within 4 years, there will have been an infinite number of generations of machines, with both speed and intelligence increasing beyond any finite level within a finite amount of time. Both an intelligence explosion and a speed explosion will give rise to runaway growth until limits or asymptotes are reached, at which point the dynamics of growth in intelligence will presumably cease to be well-behaved in a mathematical sense.Footnote 7
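The arithmetic behind the 4-year figure is a convergent geometric series: each doubling takes two subjective years, and each speed doubling halves the objective time required for the next. A sketch:

```python
# The speed-explosion arithmetic: doublings take 2 + 1 + 0.5 + ... objective
# years, a geometric series whose partial sums approach but never reach 4.
def total_time(n_doublings):
    return sum(2.0 * 0.5**k for k in range(n_doublings))

for n in [1, 5, 20, 50]:
    print(f"{n} doublings take {total_time(n):.6f} objective years")
# Infinitely many doublings fit inside a finite 4-year window: a
# finite-time singularity in theory, whatever its status in nature.
```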

Intriguingly enough, Chalmers (2010) appears to propose a literal account of the technological singularity, while ultimately acknowledging that it is impossible for the literal sense of a technological singularity to ever be reached. After all, the laws of physics will impose certain limitations, and we cannot expect the principles associated with the speed explosion and the intelligence explosion to be extended indefinitely into the future. A more moderate account could lean toward the literal interpretation, while acknowledging its impossibility in practice. Such an account would still be able to consider a scenario that is sufficiently close to the mathematical singularity (viz. the speed and intelligence of machines being possibly pushed far beyond human levels in a relatively short span of time) to be interesting or worrisome. My objection to this quasi-literal account is that you cannot have your cake and eat it. The quasi-literalist must embrace either the first (literal) or the second (metaphorical) horn of the trilemma. If the first horn is embraced, then the quasi-literalist, despite gaining maximum mileage from the notions of the intelligence and speed explosions, must ultimately concede the empirical impossibility of ever reaching or observing the technological singularity in nature. If the second horn is embraced, then further objections will lie in wait (Section 3).

Yet other singularitarians have been more careful in their wording. According to von Neumann, as paraphrased by Ulam (1958), the ever-accelerating rate of technological progress gives the appearance (italics mine) of a singularity, beyond which human affairs (as we know them) cannot continue.Footnote 8 Kurzweil (2005, p. 24) similarly distinguishes between the appearance of an acute and abrupt break in the continuity of human progress and the mathematical reality of there being no discontinuity or rupture, since the growth rates, though extraordinarily large, remain finite and exponential in nature. Furthermore, Kurzweil chalks up this distinction between appearance and reality to the currently limited nature of our framework for understanding phenomena. At the same time and despite his best intentions, Kurzweil gets pierced by the first horn on at least one occasion, when he relies (without realizing it himself) on a hyperbolic curve with a mathematical singularity (see Section 3). All things considered, my argument does not require that singularitarians nail their colors explicitly to the mast of the literalist first horn of the trilemma. It simply requires that at least some singularitarians demonstrate or invite a tendency to conflate the concept of the technological singularity with various literal senses of a singularity, thereby increasing the likelihood of a straightforward literalist interpretation of the concept of the technological singularity. The following accounts suggest that there is a natural tendency to lean toward the literal interpretation of a technological singularity in at least some singularitarian quarters: Solomonoff and the early Yudkowsky; Good and Chalmers to a lesser extent; and Kurzweil despite his best intentions.
Of these accounts, only the quasi-literal account of Chalmers demonstrates at least some consistent and critical awareness of the problems associated with the first horn of my trilemma, although it is beset with other problems that I have identified. If the first horn of the trilemma is correct, then the technological singularity will be reduced to a mere mathematical artefact and a sign that singularitarians who rely on the literal sense of the concept might be in need of newer and more comprehensive theoretical machinery.

3 Second Horn

The singularitarian who wishes to avoid the first horn may try her luck at the second horn and adopt a metaphorical sense of the technological singularity. Given that Moore’s law (hereafter: M) is often cited in support of singularitarianism, it may be good to consider the mathematical representation of this law and its implications:

\((M) \ n_i = n_0 \times 2^{(y_i - y_0)/2}\)

\(n_i\) denotes the number of transistors in the target year \(y_i\), \(n_0\) denotes the number of transistors in the reference year, and \(y_0\) denotes the reference year. More generally, Moore’s law M tells us that the number of transistors that we can squeeze into a densely integrated circuit doubles every 2 years (Moore, 1965). Alternatively, M has been interpreted as a law to the effect that the processing speed of computing machinery doubles every 18 months or 1.5 years (Moore, 1965). An exponential function of the form \(f(x) = 2^x\) describes exponential growth, and its graph may be represented by (Fig. 2).
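As a quick check on the arithmetic of M, the doubling formula can be evaluated directly (the 1971 reference values below are illustrative):

```python
# Moore's law M as stated above: n_i = n_0 * 2**((y_i - y_0) / 2).
def transistor_count(n0, y0, yi):
    return n0 * 2 ** ((yi - y0) / 2)

n0, y0 = 2300, 1971  # illustrative reference chip and year
print(transistor_count(n0, y0, 1981))  # ten years = five doublings: prints 73600.0
```

Note that the output, however large, remains finite for every finite target year: exponential growth has no singularity at any finite point.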

Fig. 2: \(f(x) = 2^x\) (exponential growth). © Melvin Chen (LaTeX)

M is a law of exponential growth, and its function \(n_i = n_0 \times 2^{(y_i - y_0)/2}\) is an exponential function. For exponential functions of the general form \(f(x) = 2^x\), whenever x takes a large though finite value, f(x) will assume an even larger though still finite output value. As x approaches infinity, f(x) approaches infinity. Exponential growth may be contrasted with hyperbolic growth, as we find it described in reciprocal functions. Relative to Fig. 1, f(x) diverges to \(\pm \infty \) as x approaches the finite point \(x = 0\). Individuals typically have exponential growth (described by an exponential function) rather than hyperbolic growth (described by a reciprocal function) in mind when they refer to technological progress and the technological singularity. By contrast, the strict and literal sense of “singularity” tends to be associated with reciprocal functions describing hyperbolic growth (Section 2).

Singularitarians tend to embrace the second horn of the trilemma and assert that they are wielding the concept of the technological singularity in a loose metaphorical sense rather than the strict and literal sense. They take the technological singularity to refer to some technology-related event or phase that will radically alter human civilization—and perhaps even human nature itself—before the middle of the \(21^\mathrm{{st}}\) century (Broderick, 2001; Kurzweil, 2005; Paul & Cox, 1996). They may cite science fictional sources for examples of the possible forms that this technological singularity could take: an artificial neural network-based system (viz. Skynet) gaining self-awareness and resisting human attempts to deactivate it in the Terminator series, a group of human beings being able to upload their minds into computers in Ken MacLeod’s The Cassini Division, and human beings relying on brain-computer interfaces that are blockchained into a network in Charles Stross’s Accelerando.Footnote 9

Singularitarians may rely on a powerful inductive argument known as the Argument from Acceleration in support of their position at the second horn of the trilemma. According to this Argument from Acceleration (Eden et al., 2012):

  • P1: The study of the history of technology reveals that technological progress has long been accelerating.

  • P2: There are good reasons to think that this acceleration will continue for at least several more decades.

  • P3: If it does continue, our technological achievements will become so great that our bodies, minds, societies, and economies will be radically transformed.

  • C: \(\therefore \) It is likely that this disruptive transformation will occur.

Defenders of the Argument from Acceleration rely on trend curves (typically showing exponential growth) in computing technology and econometrics. These trend curves support the notion of acceleration that is central to the Argument from Acceleration.

Fig. 3: Moore’s law (1959–1965) (Moore, 1965)

Fig. 4: Moore’s law: The number of transistors per microprocessor (1971–2017) (Our World in Data)

Fig. 5: The number of MIPS (million instructions per second) per $1000 of computer (1900–2000) (Moravec, 2000)

Fig. 6: Exponential growth in RAM capacity (1945–2005) (Kurzweil, 2005)

Figure 3 is taken to represent the exponential growth in the number of transistors (i.e., the doubling in number every 2 years) in a densely integrated circuit. Figure 5 is taken to represent the exponential growth in computing performance (i.e., the price-performance of computing or the amount of work a computer can do relative to a certain amount of money), as measured in MIPS.Footnote 10 Figure 6 is taken to represent the exponential growth in electronic memories (i.e., the price-performance of magnetic or disk-drive memory) through various technological paradigms (viz. vacuum tubes, discrete transistors, integrated circuits).Footnote 11 Figures 3, 4, 5, and 6 all have a logarithmic (nonlinear) scale on the vertical axis, allowing us to display the exponential growth that is supposed to be characteristic of technological progress.Footnote 12

These growth trends appear to provide us with reasons to believe P1 and P2 of the Argument from Acceleration: technological progress has been accelerating, and we should expect this acceleration to continue into the future. Bostrom (2014) is a philosopher whose views may be located at the second horn of the trilemma.Footnote 13 In his discussion of the kinetics of an intelligence explosion, Bostrom (2014, pp. 75–77) first equates the rate of change in intelligence (\(\frac{dI}{dt}\)) with the ratio between the optimization power applied to the system and the system’s recalcitrance (\(\frac{\mathfrak {O}}{\mathfrak {R}}\)):

$$\begin{aligned} (\text {Rate of change in intelligence}) \, \frac{dI}{dt} = \frac{\mathfrak {O}}{\mathfrak {R}} \end{aligned}$$

The amount of optimization power acting on a system \(\mathfrak {O}\) is the sum of the optimization power contributed by the system itself (\(\mathfrak {O}_{system}\)) and the optimization power being exerted from without through the efforts of human programming teams, advances in the semiconductor industry, computer science, and related fields, etc. (\(\mathfrak {O}_{project} + \mathfrak {O}_{world}\)):

$$\begin{aligned} (\text {Optimization power}) \, \mathfrak {O} = \mathfrak {O}_{system} + \mathfrak {O}_{project} + \mathfrak {O}_{world} \end{aligned}$$

As the system’s capabilities grow, there may come a point at which the optimization power generated by the system itself starts to dominate the optimization power applied to it from without:

$$\begin{aligned} (\text {Crossover}) \, \mathfrak {O}_{system} > \mathfrak {O}_{project} + \mathfrak {O}_{world} \end{aligned}$$

At this crossover, we will enter a regime of strong recursive self-improvement. This is reminiscent of the notion of an intelligence explosion first introduced in Section 2, except that the system will be improving itself and enhancing its own capabilities rather than designing other, more intelligent systems.Footnote 14 Such a system, capable of recursive self-improvement, has been described elsewhere as a Gödel machine, capable of interacting with the environment and rewriting any part of its own code as soon as it has found a proof that the rewrite is useful (Schmidhuber, 2003). Once certain assumptions (e.g., about the constant value of \(\mathfrak {O}_{project} + \mathfrak {O}_{world}\), the persistence of M into the future, etc.) are in place, Bostrom (2014, p. 77) is able to derive Fig. 7, which illustrates the intelligence explosion before and after the crossover.
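The qualitative shape of Bostrom's model can be reproduced with a toy numerical integration. The specific assumptions below (constant outside optimization power, \(\mathfrak {O}_{system} = I\), and recalcitrance falling as \(1/I\)) are my own illustrative choices, made only to exhibit a finite-time blow-up of the kind Fig. 7 depicts, not Bostrom's exact parameters:

```python
# Toy Euler integration of dI/dt = O / R. All modelling choices here are
# illustrative assumptions: O_outside (project + world) held constant,
# O_system = I, and recalcitrance R = 1 / I.
O_outside = 1.0      # assumed constant outside optimization power
I, t, dt = 1.0, 0.0, 0.001

while I < 1e6 and t < 100:
    R = 1.0 / I                    # assumed recalcitrance profile
    dIdt = (I + O_outside) / R     # dI/dt = O / R, with O = O_system + O_outside
    I += dIdt * dt
    t += dt

print(f"I exceeds 10^6 at t = {t:.2f}")
# Intelligence crosses any finite threshold at a finite time: under these
# assumptions the model has a mathematical singularity baked into it.
```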

Fig. 7: Graph of a simple model of an intelligence explosion (Bostrom, 2014, p. 77)

According to Fig. 7, the growth trajectory has a singularity at \(t = 18\). Almost in the same breath, Bostrom concedes that at least some assumptions supporting his construction of Fig. 7 will cease to hold once the system approaches the physical limits of information processing.Footnote 15 As a philosopher with a background in physics, Bostrom (1998) demonstrates an awareness of physical limits: he identifies the Bekenstein bound as an upper limit on the amount of information that can be contained within any given volume using a given amount of energy and infers that there are physical limits to M.Footnote 16 At the same time, M has survived several technological phase transitions before (e.g., from relays to vacuum tubes to transistors to integrated circuits to Very Large Scale Integrated circuits or VLSI), and attempts to overcome the limits of present silicon technology are already under way (e.g., molecular nanotechnology, quantum computing, etc.). Bostrom’s view is supported by a notion of exponential growth that is not strictly tied to present silicon technology and the trend of making transistors smaller. Rather, Bostrom (1998) argues that it makes more sense to interpret Moore’s law M as a statement asserting an exponential growth in computing power (per inflation-adjusted dollar) rather than chip density, possibly by means other than making transistors smaller in the future.

However, detractors will point out that while there may be periods of technological change in which we do observe acceleration, technological progress eventually levels off. We should distinguish between exponential growth (the notion on which singularitarians rely when embracing the second horn of the trilemma) and logistic growth (Modis, 2003). Natural growth follows an S-shaped logistic curve (or S-curve) rather than a steep exponential pattern. While all S-curves begin exponentially, no natural process remains exponential indefinitely. The rate of logistic growth follows a bell curve: it first accelerates, peaks, and then slows down. For instance, while there may be a population explosion in the short term, the population growth rate will have to slow down given limited food and resources. Eventually, the population stabilizes as the S-curve reaches a ceiling.

Modis claims that it is logistic growth rather than exponential growth that governs complexity and change (including technological progress) (Fig. 8). If the law of logistic growth functions as a physical limit with respect to the evolution of complexity in the universe, then P2 of the Argument from Acceleration must be false. In other words, we do not have good reasons to think that acceleration in technological progress will continue into the future. Instead, technological progress follows an S-shaped logistic curve and should eventually level off.

Fig. 8: Exponential growth (dotted lines) versus logistic growth (solid lines) (Modis, 2003)

Fig. 9: Exponential (dotted-line) and logistic (solid-line) fits to the data charting the evolution of complexity in the universe. This evolution of complexity in the universe is identified with 28 canonical milestones (e.g., the Big Bang, the Cambrian explosion, the emergence of Homo sapiens, the agricultural revolution, the industrial revolution, etc.) (Modis, 2003)

Relative to Fig. 9, the data on the canonical milestones associated with the evolution of complexity in the universe have both an exponential and a logistic fit. A logarithmic (nonlinear) scale with arbitrary units is used on the vertical axis, and it represents the change in complexity at each canonical milestone. However, if the intervals between successive canonical milestones start to stabilize or even increase, then we will have a logistic fit rather than an exponential fit. Minimally, we could argue that the singularitarian hypothesis (grounded in the notion of exponential growth and accelerating technological progress) is underdetermined by the data, since the same data could be explained in terms of logistic growth and technological progress that eventually levels off. It may be objected that technological innovation cannot be predicted in advance, and this is the very nature of innovation. Perhaps one of the attempts to overcome the limits of present silicon technology will eventually become successful, and M might survive this new technological phase transition: Bostrom’s view holds out for this possibility. We simply cannot tell what is possible before it is possible. However, this objection equally entails that the singularitarian will not have principled grounds to defend S1 (i.e., that a technological singularity is likely to happen soon). More generally, we could respond to this objection by pointing out that the laws governing the evolution of complexity in the universe are likely to govern technological progress. Furthermore, given that complexity and change in the universe have been governed by logistic rather than exponential growth, we have good reason to expect that technological growth will be logistic rather than exponential in nature.
To suppose otherwise would amount to an unjustified technological exceptionalism, according to which technological change should be treated differently from other non-technological changes in the universe.

The mathematical representation of logistic growth differs markedly from that of hyperbolic growth (Fig. 1) and exponential growth (Fig. 2). Where L denotes the maximum output value of the S-curve, k denotes the logistic rate of growth, e denotes Euler’s number, and \(x_0\) denotes the x-value at the midpoint of the S-curve, logistic growth may be described in terms of the following function:

$$\begin{aligned} f(x) = \frac{L}{1 + e^{-k(x - x_{0})}} \end{aligned}$$

When \(L = 1\), \(k = 1,\) and \(x_0 = 0\), a standard logistic function may be represented mathematically as follows:

$$\begin{aligned} f(x) = \frac{1}{1 + e^{-x}} \end{aligned}$$
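A quick computation confirms the saturation behaviour of the standard logistic function (an illustrative sketch):

```python
# The logistic function: early growth is nearly exponential, but the output
# saturates at the ceiling L instead of blowing up.
import math

def logistic(x, L=1.0, k=1.0, x0=0.0):
    return L / (1.0 + math.exp(-k * (x - x0)))

for x in [-5, 0, 5, 50]:
    print(f"f({x}) = {logistic(x):.6f}")
# f(0) = 0.5 (the midpoint); for large x, f(x) approaches the ceiling
# L = 1 rather than infinity.
```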

The standard logistic function may be represented diagrammatically as follows (Fig. 10):

Fig. 10: \(f(x) = \frac{1}{1 + e^{-x}}\) (logistic growth). © Melvin Chen (LaTeX)

The law of logistic growth governing complexity and change (including technological progress) implies that there are limits to growth, as represented by the ceiling of the S-curve (Fig. 10). Meadows, Meadows, Randers, and Behrens III’s The Limits to Growth investigated the possibility of exponential growth (economy, population, etc.) relative to a finite supply of resources. This study, commissioned by the Club of Rome, relied on World3, a system dynamics model developed at MIT and designed for simulating interactions between earth and human systems.Footnote 17 According to this study, without substantial changes in the consumption patterns of resources, we would be heading toward unmitigated global economic and ecological disaster.Footnote 18

As has been conceded by Modis (2007), there are certain methodological weaknesses concerning the accuracy of S-curves. Singularitarians may raise the following objection: if the methodology is flawed, then the natural law of logistic growth that undergirds it should be discarded. However, we should avoid throwing the baby out with the bathwater: when good logistic fits are invalidated by later data, they do not demonstrate that the natural law of logistic growth has been violated. Rather, the problems may be attributed to other factors such as the systemic traits of fitting programs (e.g., a bias toward a low ceiling), the choices made by the forecaster (e.g., number of fits made, weights assigned to the data points, choice of dimensions in a dataset), or even the quality of the original dataset.

The nature of the challenge confronting singularitarians who embrace the second horn may be more precisely stated. What the singularitarian minimally has to contend with is the following: their hypothesis favoring exponential growth is underdetermined by the data. These data, which may be used to plot the trend curves in Figs. 3–6, are equally compatible with a competing hypothesis favoring logistic growth. In addition, the exponential growth hypothesis and the logistic growth hypothesis are mathematically incompatible, and the latter is supported by a natural law of logistic growth whereas the former is not. According to this law, as a species grows, the rate of growth will be constrained by the size of the ecological niche for that species and the competition for limited resources within that niche. Anyone who finds this law persuasive is unlikely to be convinced by the singularitarian overture. At the same time, there is a need to accumulate more extensive and accurate data about technological development, avoid the methodological pitfalls associated with S-curves, and ensure that forecasters exercise more wisdom and care in judgment. Rigorously generated S-curves for technological growth will strengthen the case against singularitarians who embrace the second horn. Indeed, if embracing the second horn of the trilemma (or holding fast to a metaphorical sense of the technological singularity) implies a belief in indefinite exponential growth with respect to technological progress, then there may even be ecological costs and existential risks accompanying naive espousals of singularitarianism. Besides the singularitarian hypothesis being underdetermined by the data, there will be accompanying ecological and (possibly) existential risks once the finite supply of resources on which technological progress relies is taken into account. Belief in singularitarianism, it may be argued, is not merely false: it could even be harmful.
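The underdetermination point can be made concrete with a small computation: over an early stretch of data, an exponential curve and a logistic curve with a distant ceiling are nearly indistinguishable. The parameter values below are illustrative choices of my own, not drawn from the datasets discussed above:

```python
# Underdetermination sketch: an exponential model and a logistic model with
# a high ceiling agree closely over the early regime, so early data alone
# cannot decide between them.
import math

def exponential(x):
    return math.exp(x)

def logistic(x, L=1000.0):  # ceiling set far above the "observed" range
    return L / (1.0 + (L - 1.0) * math.exp(-x))

# "Observed data" drawn from the early regime only:
for x in range(0, 4):
    print(f"x = {x}: exponential = {exponential(x):.2f}, "
          f"logistic = {logistic(x):.2f}")
# The two hypotheses diverge only near the ceiling -- precisely the data
# about future technological progress that we do not yet have.
```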

A final objection may be urged against the singularitarian who favors the second horn: the Slowdown Hypothesis.Footnote 19 To understand the nature and implications of this hypothesis, we must first identify certain macroevolutionary trends. What Kurzweil surprisingly and signally fails to realize is that the curve represented in his diagram (Fig. 11) is hyperbolic: the best-fit equation describing Kurzweil’s “Countdown to Singularity” curve has a mathematical singularity at 2029. As noted in Section 2, it is at this juncture that Kurzweil is pierced by the first horn of my trilemma. A similar time series analysis (Fig. 12) by the Russian physicist Alexander Panov (2005) of the dynamics of the global macroeconomic development rate yields a hyperbolic curve with a mathematical singularity at 2027:

Fig. 11 Major evolutionary shifts from a big historical perspective, with a logarithmic scale for time to next event and a linear scale for time before present (Kurzweil, 2005, 18)

Fig. 12 Dynamics of global macroeconomic development rate according to Panov (cited in Nazaretyan (2017, 31))

In a separate and striking analysis of the growth patterns of the global human population, Von Foerster et al. (1960) demonstrate that the global human population dynamics between 1 and 1958 C.E. can be described with the following equation:

$$N_t = \frac{C}{(t^0 - t)^{0.99}}$$

where \(N_t\) is the world population at time \(t\), and \(C\) and \(t^0\) are constants, with \(t^0\) corresponding to a demographic singularity.

The parameter \(t^0\) (or doomsday) has been estimated at \(t^0 \approx 2026.87\). Given the hyperbolic pattern of growth detected by Von Foerster et al. (1960), it has been claimed that the singularity of the demographic history of the world population (\(N_t\)) will be reached in 2027. Relative to the macroevolutionary trends described by Kurzweil (evolutionary shifts), Panov (global macroeconomic development), and Von Foerster, Mora, and Amiot (global human population growth), Korotayev (2018) proposes that the mathematical singularities that have been identified (respectively: 2029, 2027, and 2027) do not indicate some unprecedented acceleration in the rate of technological progress but rather an inflection point, after which the pace of global evolution may be expected to slow down systematically in the long term. Korotayev’s interpretation will disappoint singularitarians who embrace either the first or the second horn of the trilemma: the mathematical singularity ought to be interpreted as an inflection point rather than a takeoff point, and macroevolutionary trends support logistic rather than exponential growth.
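The defining feature of the Von Foerster et al. equation, its finite-time blow-up, can be seen in a short sketch. The constant \(C\) below is not the authors' fitted value; it is a hypothetical calibration chosen so that the 1958 value matches roughly 2.9 billion people, which suffices to exhibit the hyperbolic behavior.

```python
# Sketch of the Von Foerster et al. (1960) "doomsday" equation
#   N_t = C / (t0 - t)^0.99,  with t0 estimated at about 2026.87.
T0 = 2026.87                         # estimated demographic singularity
K_EXP = 0.99                         # fitted exponent from the paper
C = 2.9e9 * (T0 - 1958) ** K_EXP     # hypothetical calibration: ~2.9e9 people in 1958

def world_population(t):
    """Hyperbolic growth: the value blows up in finite time as t -> t0."""
    assert t < T0, "the formula is undefined at and beyond t0"
    return C / (T0 - t) ** K_EXP

# Unlike logistic growth, which saturates, the hyperbolic curve grows
# without bound as t approaches the mathematical singularity at T0.
print(f"{world_population(1958):.3e}")   # ~2.9e9 by construction
print(f"{world_population(2000):.3e}")
print(f"{world_population(2026.8):.3e}")
```

Whether this blow-up marks a takeoff or, as Korotayev argues, an inflection point followed by a systematic slowdown is precisely what is at issue in the text.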

These macroevolutionary trends, suggesting that the exponential growth of computational and design resources and capacities required by singularitarians is unlikely, lend support to the Slowdown Hypothesis (Plebe & Perconti, 2012). At a first pass, the normalized distance (\(\Delta \)) between the performance of an AI system and some ideal standard S (e.g., general intelligence) may be formalized as follows:

$$\Delta = \sum_{p \in P} (1 - b_p)$$

\(\Delta \) is a sum of distances over a set P of simple elementary processes p. Each process produces some measurable performance \(b_p\), normalized so that 1 corresponds to a system that is fully intelligent relative to S and 0 to a system that is absolutely dull. As long as AI research efforts accumulate over time, it is plausible that the performance \(b_p\) will improve; \(b_p(t)\) would therefore probably be a monotonic function increasing continually toward 1. However, \(\Delta \) is an estimate of the level of intelligence attained by AI systems relative to a set of processes P only, not an absolute measure of intelligence. To estimate the absolute general intelligence of AI systems, we would need \(\tilde{P}\), the set of all possible processes necessary for a generally intelligent system. \(\tilde{P}\) will include many processes for which no research has yet begun.Footnote 20 Unless we know everything about intelligence (which we do not), we will not know with any precision all the processes contributing to a generally intelligent system; we therefore cannot know \(\tilde{P}\) in advance. Furthermore, when all the processes p in the set P are treated independently, we neglect how the many processes involved in generally intelligent behavior might interact with one another.

Each newly discovered process in \(\tilde{P}\) (though not previously in P) will require its own research and development, and we will need to determine how this new process interacts with other related processes. This entails that the evolution of \(\Delta \) will be complex and that the slowdown effect will be enhanced. The Slowdown Hypothesis ultimately maintains that the rate of technological progress is not accelerating but slowing down, and perhaps even beginning to decline. Supported by macroevolutionary trends and by the complexities associated with an evolving \(\Delta \) and an indeterminate \(\tilde{P}\), the Slowdown Hypothesis thus refutes the argumentative plank on which singularitarians typically lean in their acceptance of the second horn: the Argument from Acceleration.
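The slowdown mechanism can be illustrated with hypothetical numbers. Every process name and performance value below is invented for the sake of the example: improving the known processes in P shrinks \(\Delta \), but each newly discovered process in \(\tilde{P}\), whose performance starts near 0, pushes \(\Delta \) back up.

```python
# Illustrative sketch of the distance measure Delta = sum over p in P of (1 - b_p),
# with hypothetical process names and performance values.

def delta(performances):
    """Normalized distance from the ideal standard S (0 = fully intelligent)."""
    return sum(1.0 - b for b in performances)

known = {"vision": 0.9, "language": 0.8, "planning": 0.6}
d_before = delta(known.values())            # 0.1 + 0.2 + 0.4 = 0.7

# Research improves every known process by a modest increment...
improved = {p: min(1.0, b + 0.1) for p, b in known.items()}
# ...but also reveals a process nobody had studied before, whose
# performance starts near zero (a member of P-tilde newly added to P).
improved["social_reasoning"] = 0.05

d_after = delta(improved.values())          # 0.0 + 0.1 + 0.3 + 0.95 = 1.35

print(d_before, d_after)
```

Even though every individual \(b_p\) is monotonically increasing, the estimated distance to general intelligence grows whenever discovery outpaces improvement, which is the slowdown effect described above.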

4 Third Horn

If the singularitarian wishes to evade the brunt of the attack that may be anticipated with the second horn of the trilemma, she could concede the unsoundness of the Argument from Acceleration and instead defend the metaphorical sense of the technological singularity by alternative argumentative means. The Argument from Acceleration, however, enjoys such wide currency among members of the singularitarian camp that it is difficult to imagine what an alternative though equally influential argument in favor of singularitarianism might look like.

Another possibility would be to hold fast to either the first horn (literal) or the second horn (metaphorical) of the trilemma and maintain that there are multiple possible (literal or metaphorical) senses of the singularity. For instance, Hanson (2008) has attempted to widen the metaphorical sense of the singularity and permit “singularity” to mean an overwhelming departure from prior trends, with uneven and dizzyingly rapid change thereafter. Hanson then proceeds to enumerate a list of singularities (viz. the Big Bang, the emergence of brain-like structures, the Agricultural Revolution, and the Industrial Revolution) in the history of the universe and predict that the technological singularity may constitute the next singularity.

This move looks suspiciously like Humpty Dumptyism. Humpty Dumpty is a literary character in Lewis Carroll’s Through the Looking-Glass. In a particular exchange, Humpty Dumpty tells Alice “There’s glory for you,” to which Alice confesses that she does not understand what Humpty Dumpty means by “glory.” Humpty Dumpty replies that by “There’s glory for you” is meant “There’s a nice knock-down argument for you,” simply in virtue of his intending “glory” to mean “a nice knock-down argument.” The worry here is that singularitarians like Hanson who inflect the singularity with new meanings resemble, all too uncomfortably, Humpty Dumpty when he tells Alice that he can make a word mean whatever he intends it to mean. Might the more straightforward employment of alternative terms and phrases such as “canonical milestones” (to take a leaf out of Modis’s book), “an overwhelming departure from prior trends” (to quote verbatim the definiens of “singularity” from Hanson (2008)), or even “paradigm shift” not suffice?

If neither the literal nor the metaphorical sense of “singularity” gives rise to palatable consequences for the singularitarian, then the singularitarian will be left with only one final option: the third horn of the trilemma. There is a school of thought concerning the technological singularity that has been christened “Baloney” (Baez, 2011). According to the school of Baloney, all talk about the technological singularity is tripe or nonsense. Nonetheless, it by no means follows from this skepticism about the technological singularity that we could not end up with machines exhibiting superhuman levels of intelligence. Furthermore, even without the technological singularity, we could still quite consistently make plans about the impact that AI and advanced technologies will have on our society (Walsh, 2017).

The third horn of the trilemma is where Ken MacLeod’s characterization of the technological singularity might be thought to reside. According to one of his characters in The Cassini Division, the technological singularity may be described as a “rapture of the nerds” (MacLeod, 1999). While wrapped up in an understandable yearning for transcendence (religious or otherwise), singularitarianism relies on a nonsensical concept (viz. the technological singularity) and betrays a naive understanding of what intelligence might be about, especially relative to neuroscience (Horgan, 2008). The demarcation problem concerns the issue of how we distinguish between science and non-science (including pseudoscience) (Resnik, 2000). According to one famous solution to the demarcation problem, a theory or hypothesis is scientific only if it is possible in principle to establish that it is false (Popper, 1962). Singularitarians who wish to embrace the third horn, perhaps on the grounds of their yearning for transcendence, must concede that S1–S2 are unfalsifiable, since they contain at least one nonsensical notion (viz. the technological singularity), and are therefore pseudoscientific.

In the cold light of day, the third horn implies that we ought to abandon the concept of the technological singularity altogether. In other words, we have good philosophical grounds to defend an eliminativism about the concept of the technological singularity.Footnote 21 AI researchers, engineers, and scientists should stop wasting time indulging in pseudoscientific and escapist fantasies or searching for the right definition of the concept of the technological singularity, since all this would count as a needless distraction. Instead, we should focus on addressing real-world problems and identifying solutions for these problems (Horgan, 2008). An eliminativist about the concept of the technological singularity will identify this concept as a conceptual straitjacket that we can and ought to dispense with, since it fails to reflect the underlying nature of reality and might even distract us from important intellectual work.

Singularitarians will not ordinarily want to embrace the third horn, since it would render meaningless the very claims (S1–S2) on which their position is grounded (see Section 1). However, whether the concept of the technological singularity is meaningful (literally or metaphorically) or meaningless is ultimately independent of the beliefs and preferences of singularitarians. In conclusion, my trilemma is effectively a challenge to singularitarianism concerning the obscurity of its central concept of the technological singularity. According to its three horns, the technological singularity is either a mere mathematical artefact (if taken in the literal sense), underdetermined by the data and probably downright false (if taken in the metaphorical sense), or a pseudoscientific notion (if taken to be nonsense). All other things being equal, if the trilemma holds, then eliminativism about the technological singularity will count as an appropriate attitude.