1 Intelligent Machinery, a Heretical Theory

Alan Turing (1912–1954) made two BBC radio broadcasts in 1951 that contained some of his strongest statements about the possibility of intelligent machines and their consequences for humanity. One of the broadcasts, first aired on May 15, was part of a series Automatic Calculating Machines that featured other British computer pioneers (Jones, 2004). The series may have limited Turing’s choice of title, “Can Digital Computers Think?” (Turing, 2004c [1951]). For the other broadcast, however, Turing gave the title “Intelligent Machinery, a Heretical Theory” (Turing, 2004d [c. 1951]). I will quote from the climax of Turing’s 1951 public lectures, starting with the former:

If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. A similar danger and humiliation threatens us from the possibility that we might be superseded by the pig or the rat. This is a theoretical possibility which is hardly controversial, but we have lived with pigs and rats for so long without their intelligence much increasing, that we no longer trouble ourselves about this possibility. We feel that if it is to happen at all it will not be for several million years to come. But this new danger is much closer. If it comes at all it will almost certainly be within the next millennium. It is remote but not astronomically remote, and is certainly something which can give us anxiety. (Turing, 2004c [1951], pp. 485–486)

Turing is concerned with our emotional response to a question we might call ontological: whether there could be some species, including machines considered as a species, that would eventually surpass the human race in intelligence. He reminds us that humans have held a dominant position among the species, but this position, he points out, is not necessarily permanent. He alludes to the pig and the rat, his carefully chosen examples, to highlight, somewhat ironically, our sense of superiority over other species. He then shifts the discussion to the timescale of possible events, since the evolution of machines would not be subject to the same timescale as that of animal species. He warns that the threat posed to us by intelligent machines is a “remote but not astronomically remote” possibility.

In the climax of his other 1951 BBC broadcast, Turing’s focus shifts:

Let us now assume, for the sake of argument, that these [intelligent] machines are a genuine possibility, and look at the consequences of constructing them. To do so would of course meet with great opposition, unless we have advanced greatly in religious toleration from the days of Galileo. There would be great opposition from the intellectuals who were afraid of being put out of a job. It is probable though that the intellectuals would be mistaken about this. There would be plenty to do, trying to understand what the machines were trying to say, i.e. in trying to keep one’s intelligence up to the standard set by the machines, for it seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler’s ‘Erewhon.’ (Turing, 2004d, p. 475).

Here Turing is concerned with our emotional response to a question we might call ethical: whether the construction of such intelligent machines would be a sensible project. It is worth noting that Turing’s focus was on a particular figure in society, the intellectual, who would be afraid of losing his or her job. He then turned to the projection that once intelligent machinery was achieved, it would not take long for machines to take control, and he explicitly cited Samuel Butler’s Victorian novel Erewhon (Butler, 1872) as an inspiration.

These strong statements about the future of machines in society have long been intriguing. This article asks how they can best be interpreted to advance our understanding of Turing’s philosophy. Was Turing just joking, or was he trying to make a serious point?

2 Argument Sketch

Biographers, historians, philosophers, scientists, and novelists have answered this question differently. I will argue that Turing’s ironic and humorous attitude has led most interpreters to either caricature (as Frankenstein-like) or downplay (as polite humor) his vision of the future of intelligent machines in society, while a close and sustained examination of his arguments seems to be lacking. In particular, I will identify three images of Turing drawn from contemporaries and disseminated in the secondary literature (§3), and explore the first, which portrays Turing as a nonconformist, utopian, and radically progressive thinker reminiscent of Percy B. Shelley.

I will propose a historiographical and philosophical interpretation of Turing’s ironic statements as satire, or irony with a point. I will suggest that Turing embraced humor and irony as a personal philosophical stance and used it as a method of self-expression (§4), making arguments through the formulation of surprising contrasts intended to unsettle the assumptions of his interlocutors. Further, it can be shown that his move in 1951 was not a thoughtless, isolated step. Instead, the same argument is consistently present in Turing’s primary sources every year from 1947 to 1951. I will emphasize what I call its ontological and ethical components, which appear, for example, in Turing’s formulation of two objections to intelligent machines in 1948 and 1950. These objections articulate what he called — but denied being guilty of — his “Promethean irreverence” (§5). Turing had seen models of satire in the works of Charles Dickens and Samuel Butler, whose influences can be traced in Turing’s peculiar conception of an intelligent machine (§6). For a variety of reasons, I will argue that Turing’s project can hardly be associated with that of Mary Shelley’s character, Dr. Frankenstein (§7). I will reconstruct Turing’s overarching argument, here interpreted as a utopian satire whose point is manifested in his conception of an intelligent machine (§8). Concluding remarks are given at the end, drawing a parallel between Turing’s intelligent machine utopia and Percy B. Shelley’s masterpiece, Prometheus Unbound (§9).

In addition to helping to clarify a puzzling aspect of Turing’s philosophy, this article draws attention to an important aspect of the future of intelligent machines in society — namely, the impact of intelligent machines on the ability of intellectuals to exercise power. This is a non-obvious connection that Turing keenly foresaw. Distrustful of the attitudes of some intellectuals in positions of power, Turing hoped that his truly intelligent, ever-learning machines would expose the various forms of chauvinism he saw in their views of society and nature. Such intellectuals would eventually be surpassed by the machines he envisioned and turned into ordinary people, as work once considered “intellectual” would be transformed into non-intellectual, “mechanical” work. I study Turing’s irony in its historical context and follow the internal logic of his arguments to their limit. I will suggest that he genuinely believed that his ever-learning child machines, educated by individuals (not by large corporations or nation-states) to grow their intelligence out of their own experiences, would help distribute power in society.

3 Three Images of Turing

Starting with Turing’s contemporaries, I will move through the commentary of Turing’s biographer Andrew Hodges to more recent commentators. Following this chronological approach, I will examine three different images of Turing that can be identified in the secondary literature in relation to Turing’s use of irony (§§3.1, 3.2, 3.3). These images partly overlap and partly diverge in subtle ways: (i) a utopian, radically progressive thinker, suggesting a scientific version of the English Romantic poet Percy Bysshe Shelley (1792–1822); (ii) an infantilized, politely humorous, and muted genius; and (iii) an irresponsible scientist reminiscent of Mary Shelley’s dystopian character, Dr. Frankenstein (2012 [1818]). I will show that in his extraordinarily rich biography of Turing, Hodges (1983) simultaneously promotes all three images, which testifies to the fact that he offered a highly multifaceted view of Turing’s character based on the testimony of Turing’s contemporaries whom he interviewed. However, while two of these images have been widely disseminated, the first image, that of a nonconformist, utopian, and radically progressive thinker reminiscent of a “scientific” Percy B. Shelley, has remained largely underexplored since Hodges.Footnote 1 Emphasizing this image, I will follow the internal logic of Turing’s own arguments to their limit.

3.1 First Image: a Nonconformist, a Utopian, a Sort of Scientific Shelley

In a letter to Mrs. Sara Turing on December 18, 1954, a few months after her son’s death in June, Geoffrey Jefferson (1886–1961), Professor of Neurosurgery at the University of Manchester during Turing’s time as Reader in Mathematics there (1948–1954), offered this rich picture of Turing:

He was so unversed in worldly ways, so childlike it seems to me, so unconventional, so non-conformist to the general pattern … so very absentminded. His genius flared because he had never quite grown up. He was, I suppose, a sort of scientific Shelley.Footnote 2

It is worth noting that Jefferson was born and raised in late Victorian England, just when Percy Shelley’s posthumous reputation was disputed by different political forces, notably at the foundation of the Shelley Society in 1886, the year Jefferson was born (Smith, 1945, p. 268). Jefferson may not have understood the philosophical implications of linking Turing to the Romantic poet, for he interpreted Turing as a reductionist of mind and imagination.Footnote 3 Turing himself did not feel understood by Jefferson. According to Hodges (1983, p. 439), Turing “would refer to Jefferson as an ‘old bumbler’ because he never grasped the machine model of the mind.” It is clear, however, that Jefferson’s image of Turing as a kind of scientific Shelley operates on the level of Shelley’s utopianism and radical progressivism (Scrivener, 2016 [1982]).

Recognized with the Lister Prize of the Royal College of Surgeons of England, Jefferson devoted his Lister Oration, delivered in London on June 9, 1949, to a critique of the possibility of thinking machines. His lecture was entitled “The Mind of Mechanical Man” (1949). Jefferson had joined the public discussion on mind and machine with fairly clear political concerns. Against the view he attributed to “the physicists and mathematicians,” whom he cast as invaders of a field of brain-mind relations that belonged to physicians (p. 1105), he declared:

[T]he concept of thinking like machines lends itself to certain political dogmas inimical to man’s happiness [and] erodes religious beliefs that have been mainstays of social conduct. (Jefferson 1949, p. 1107)

Jefferson’s attitude can be contextualized by the Cold War climate.Footnote 4 With this context in view, we can interpret Jefferson’s description of Turing without resorting to psychologizing him. For someone with Jefferson’s conservative views and values (Schurr, 1997), not conforming “to the general pattern” could hardly be interpreted as something other than “very absentminded.” This tension is largely intrinsic to Jefferson’s position as a committed historical actor who perceived Turing’s views as dangerous. It can be seen in Jefferson’s simultaneous use of the word “non-conformist.”Footnote 5

Being twenty-six years older than Turing, Jefferson tried to give Turing advice, perhaps a nudge. Jefferson’s opposition to Turing’s views continued until their joint appearance on a BBC broadcast on January 15, 1952, which marked the end of Turing’s public defense of his intelligent machine project in the media. Turing commented on the broadcast in a letter to a close friend,Footnote 6 writing that “J. [Jefferson] certainly was rather disappointing though.” Then, in a puzzling juxtaposition, he added: “I’m rather afraid that the following syllogism may be used by some in the future[:] Turing believes machines think. Turing lies with men. Therefore machines do not think.”

Hodges (1983), writing in the late 1970s, a few decades after Jefferson’s letter of condolence, but still during the Cold War,Footnote 7 partly followed and partly departed from Jefferson in his construction of Turing’s character. He explicitly acknowledged: “Jefferson certainly found an apt description of Alan, as ‘a sort of scientific Shelley’” (p. 439). Here, too, Hodges seems to be largely in the mold of Jefferson, albeit with allusions to other characters:

Money, commerce, and competition had played no obvious part in the central developments in which Alan Turing was enmeshed, developments which had allowed him in many ways to remain the idealistic undergraduate. His retention of a primitive liberalism, his ‘championing of the underdog’ …, like his obsession with the absolutely basic, had the flavour of more Utopian thinkers than Mill. Tolstoy was a figure that he brought to one person’s mind, and Claude Shannon had perceived him as like Nietzsche, ‘beyond Good and Evil.’ (Hodges, 1983, p. 308)

Hodges further refined his portrayal of Turing by comparing him to Edward Carpenter (1844–1929), one of the first socialists in BritainFootnote 8:

But perhaps closer in spirit than either of these, and certainly closer to home, was another late nineteenth-century figure who had lurked more in the back room of political consciousness. That awkward figure Edward Carpenter, while sharing much in common with each of these European thinkers, had criticised Tolstoy for a restrictive attitude to sex and Nietzsche for overbearing arrogance. And in Carpenter, at a time when socialism was supposed to be about better organisation, lay the example of an English socialist not interested in organisation but in science, sex and simplicity — and with bringing these into mutual harmony. (Hodges, 1983, p. 308)

Carpenter serves Hodges as a reference point for positioning Turing as more progressive than Tolstoy and less pretentious than Nietzsche. Both Turing and Carpenter lived by first principles, and Carpenter’s views, Hodges notes (p. 310), played a part in “the more innocent days” of the British Labour Party: “His naively lucid questioning of what life was for, and of what socialism was going to make it.” At times, Hodges may give the impression that Turing was naive.

3.2 Second Image: a Genius of Childlike Manners, a Muted Figure, a Gentle Mocker

Jefferson’s association of Turing’s genius with “childlike” ways was emphasized by Hodges in countless instances, for example in the above quote, “[it] allowed him … to remain the idealistic undergraduate”; or, here, where he almost suggests that Turing was a rebel without a cause, “for Alan the real point lay not in political commitments but in the resolve to question authority” (p. 72). In places, Hodges seems to agree with Jefferson’s tendency to infantilize Turing, although Jefferson was acting in the role of a committed historical agent, while Hodges was playing a somewhat more detached role as Turing’s post-mortem biographer. Following Jefferson, Hodges quoted from J. A. Symonds’ Shelley (1884) to compare Turing’s and Percy Shelley’s attributes:

Apart from the more obvious similarities, Shelley also lived in a mess, ‘chaos on chaos heaped of chemical apparatus, books, electrical machines, unfinished manuscripts, and furniture worn into holes by acids,’ and Shelley’s voice too was ‘excruciating; it was intolerably shrill, harsh and discordant.’ (Hodges, 1983, p. 439)

However, while Jefferson considered Turing’s views so dangerous that he felt compelled to respond forcefully in public, Hodges went on to write:

Alike they were at the centre of life; alike at the margins of respectable society. But Shelley stormed out, while Alan continued to push his way through the treacly banality of middle-class Britain, his Shelley-like qualities muted by the grin-and-bear-it English sense of humour, and filtered through the prosaic conventions of institutional science. (Hodges, 1983, p. 439)

Hodges suggests that Turing’s humor was polite and filtered through the conventions of his institutional environment. And yet his strong views appeared in some of Britain’s most prominent media.

The full depth of Turing’s views on society was not yet clear to Hodges, writing in 1983. Turing never expanded his inner circle very much, and this is a clear point that Hodges makes. However, this does not mean that Turing’s radical views were silenced. Indeed, Turing envisioned machines passing one of his imitation tests around the 2050s (Turing et al., 2004 [1952], p. 495). Meanwhile, the displacement of humans by machines has already worried neoliberal economists, who have called it the “Turing trap” (Brynjolfsson, 2022).Footnote 9 Turing’s Shelleyan qualities were not muted. Yet while Hodges’ foundational work was key to making Turing known and even a global icon, it also supported depictions of Turing that might suggest an infantilized genius, such as Derek Jacobi’s portrayal in Breaking the Code, a 1996 BBC television movie based on the 1986 play of the same name, written with Hodges’ support.Footnote 10

Jack Copeland (2012) chose “humour” as the first of three words, followed by “courage” and “isolation,” to sum up Turing: “he had an impish, irreverent, and infectious sense of humour” (p. 1). Turing’s wit fits in Copeland’s biography with other qualities, “patriotic,” “unconventional,” and “genius,” in the story of an unexpected protagonist of the Allied victory in the information war of World War II in the Atlantic theater. Copeland’s foundational work on Turing, biographical, scientific, and philosophical, sheds further light on Turing’s contributions to the Allied war effort and to the early days of modern computing and machine intelligence, some of which had long been concealed from the public record and the technical literature. But Copeland’s focus on a reparative narrative may distract from Turing’s public use of humor, which, I will argue, constituted a sophisticated defensive tactic. I will draw attention to how Turing used his sense of humor to protect his eccentric and rebellious way of criticizing habits and customs, and social and institutional structures, particularly what Agar (2003) has called the postwar British “government machine.”

At a time when artificial intelligence (AI) was not as much in the public discussion as it is today, the nature of Turing’s 1951 remarks on the future of machines in society was given very short shrift in Copeland’s major work, The Essential Turing:

Turing ends [his 1951 BBC broadcast] ‘Intelligent Machinery, A Heretical Theory’ with a vision of the future, now hackneyed, in which intelligent computers ‘outstrip our feeble powers’ and ‘take control’. There is more of the same in [Turing’s other BBC broadcast delivered in 1951, ‘Can Digital Computers Think?’]. No doubt this is comic-strip stuff. (Copeland, 2004, p. 470)

Writing in the early 2000s, when machine learning was still on the verge of becoming a dominant paradigm in AI, Copeland seems to have suggested reading Turing’s 1951 remarks as childish comedy.

More recently, in the wake of the AI resurgence, Diane Proudfoot has also commented on Turing’s ironic 1951 remarks. Proudfoot focused on the alleged stupidity of Turing’s interlocutors. She generalized a notion of “AI panic” to connect events from Butler’s nineteenth century through Turing’s twentieth century to contemporary events related to AI. While characterizing Turing’s stance as “gentle mockery,” Proudfoot acknowledged that it had a “serious edge”:

Turing (following Butler) poked fun at the fear of out-of-control AI. […His] response to AI panic was gentle mockery. All the same, there was a serious edge to his humor. If runaway AI comes, he said, “we should, as a species, feel greatly humbled.” He seemed almost to welcome the possibility of this humiliating lesson for the human race. (Proudfoot, 2015)

One question not asked is why Turing would have welcomed a humiliating lesson for humanity. Further, was all of humanity really the target of his irony?

Overall, by emphasizing the interpretation of Turing’s irony as polite humor, or by neglecting either the presence of a point in Turing’s irony or the specific class or group that Turing was targeting, Turing scholars may have left the way open for more distant commentators to speculate. Turing has been widely read in the same charged way that he was read in a Cold War climate, and, as will be shown below, often as a dystopian agent.

3.3 Third Image: an Irresponsible Scientist, a Mechanical Necromancer, a Frankenstein

Cold War resonances are particularly evident in the attitude of Wolfe Mays (1912–2005), a philosopher at the University of Manchester at the time. Like Jefferson, Mays attended the 1949 Manchester seminars that preceded Turing’s formulation of his test (Gonçalves, 2022a). In the summer of 1950, as Mays later reported (2001), he was asked by Gilbert Ryle, the editor of Mind, to write a reply to Turing’s paper for the same October 1950 issue, but Ryle reportedly rejected his piece for being “too polemical.” Later published (1952), Mays’ paper offered a strong critique of Turing’s. He rejected Turing’s proposed association of the words “machine” and “thinking” and offered instead the words “robot” and “artifice”:

[I]t may be necessary to introduce a new label to indicate a device which simulates overt human activities without at the same time duplicating our internal behaviour. The word is ready to hand and was coined by Karel Capek, we call them ‘robots’ […]. In this connection it might be a good thing to drop the word ‘machine,’ with its emotional overtones of clanging metal, and use some such neutral word as ‘artifice.’Footnote 11 (Mays, 1952, p. 150)

Coined in the English translation of Capek’s play R.U.R. (Rossum’s Universal Robots) (2019 [1923]), the word “robot” comes from the Czech robotnik (forced laborer) and was associated with determinism and nation-state tyranny. Mays went on to complain about “[t]he paradoxical Frankenstein nature of the machine-mind” (1952, p. 150), implicitly assigning to Turing the designation of a “mechanical necromancer” (p. 153).

In line with Mays’ insinuations casting Turing as a Dr. Frankenstein — and it is worth noting at this point Hodges’ simultaneous promotion of the three images under analysis (§§3.1, 3.2, 3.3) — Hodges characterizes Turing’s views:

His ruthless, raw view of science was something that Lyn Newman again captured with an image of him as ‘the Alchemist’ of the seventeenth century or before – recalling a time when science was not shrouded in titles and patronage and respectability, but was nakedly dangerous. There was a Shelley in him, but there was also a Frankenstein — the proud irresponsibility of pure science, concentrated in a single person. (Hodges, 1983, p. 521)

Hodges evokes Lyn Irvine’s portrayal (2012 [1959]) of Turing.Footnote 12 However, according to her, Turing “certainly had less of the eighteenth and nineteenth centuries in him than most of his contemporaries.” And yet Hodges links Turing to Frankenstein, whose figure is surely more representative of scientists since the eighteenth-century Age of Enlightenment. Further, Hodges skips over Irvine’s clear note: “His mother and his housemaster, of one mind about him throughout, saved Alan from what threatened to be a career of scientific pranks.” Nor does Hodges seem to have fully appreciated Irvine’s fabulous account of the tenderness in Turing’s eyes:

... his eyes ..., blue to the brightness and richness of stained glass. They sometimes passed unnoticed at first; he had a way of keeping them to himself, and there was also so much that was curious and interesting about his appearance to distract the attention. But once he had looked directly and earnestly at his companion, in the confidence of friendly talk, his eyes could never again be missed. Such candour and comprehension looked from them, something so civilized that one hardly dared to breathe. Being so far beyond words and acts, that glance seemed also beyond humanity. (Irvine, 2012 [1959], p. xxi)

Whether by what he left out or by what he emphasized, Hodges may have further stimulated the image of Turing as a Frankenstein that Mays had put forward. Although Hodges supported Jefferson’s image of Turing as a scientific Shelley, observing Turing’s irreverence for patronage and institutional power, he later departed to some extent from Jefferson, who had used the word “non-conformist” (see note 5 above) rather than “irresponsible.” Hodges has created a multifaceted profile of Turing, “the enigma,” based on his extensive research and the accounts of many of Turing’s contemporaries. One of them, Turing’s close colleague Donald Michie (1984), acknowledged this in his review of Hodges: “A single scene may yield many views — as many as there are observers.” Michie also noted that, although Hodges is a mathematician, his skill in writing his Turing biography “is that of the novelist and the dramatist.”

In the wake of Hodges’ work, the image of Turing as Dr. Frankenstein has reappeared in non-expert readings of Turing, both in science and fiction. For example, Hayes and Ford (1995) presented an influential critique of Turing’s imitation test. Alluding to Frankenstein, they urged their colleagues to abandon the goal of creating an “artificial human” and thereby free themselves from “Turing’s ghost.” Their commentary makes it sound as if Turing, unlike subsequent generations of AI scientists, was irresponsible.

Meanwhile, in fiction, Ian McEwan’s recent novel Machines Like Me (2019) portrays Turing living beyond his 42nd year to lead the development of embodied artificial humans with artificial skin and a host of other features that make them resemble real humans. It makes it seem as if Turing’s ambition was the synthesis of a human-like artificial being. Again, this is reminiscent of Mary Shelley’s novel, which explores Frankenstein’s drive to control nature and circumvent its processes in the laboratory.

Overall, descriptions of Turing as a dystopian actor, a mechanical necromancer like Frankenstein, seem intriguing against the backdrop of Turing’s 1951 statements. I will address them in light of Turing’s conception of an intelligent machine (§7).

4 The Ironic Turing

Turing’s sense of humor is highlighted in the accounts of his friends (Turing, 2012 [1959]).Footnote 13 But what kind of humor did Turing exhibit and what can be drawn from it?

4.1 “Probably He Did Not Mean This to be Taken Too Seriously…”

Attention to Turing’s use of humor, almost as a means of distracting from the seriousness of his views, goes back to his own time. A paradigmatic example can be found in the wake of the June 1949 polemic between Turing and Jefferson over the possibility of intelligent machines. The Times had highlighted Jefferson’s strong arguments, which claimed a myriad of things that machines would never be able to do, most notably “write a sonnet or compose a concerto because of thoughts and emotions felt.”Footnote 14 The next day, the reporter managed to get a reply from Turing, speaking for the Computing Machine Laboratory at Manchester University, of which he had been appointed deputy director by Max Newman:

Mr. Turing said yesterday: “This is only a foretaste of what is to come, and only the shadow of what is going to be … I do not see why it [the machine] should not enter any one of the fields normally covered by the human intellect, and eventually compete on equal terms”. “I do not think you can even draw the line about sonnets, though the comparison is perhaps a little bit unfair because a sonnet written by a machine will be better appreciated by another machine.”

Mr. Turing added that the University was really interested in the investigation of the possibilities of machines for their own sake.Footnote 15

This extraordinary reply prompted many reactions in the newspaper’s correspondence columns.

For example, Hodges found that Dom Illtyd Trethowan, a Benedictine intellectual from Downside Abbey, was eager to respond to Turing’s ironic words about sonnet-writing machines that appeared in The Times. In a letter to the newspaper, the monk addressed the “responsible scientists” whom he urged “to be quick to dissociate themselves” from Turing’s research program.Footnote 16 (Again, it is worth noting that Hodges’ reference to Turing’s “irresponsibility” echoed a committed historical actor.) Trethowan went on to warn that “[e]ven our dialectical materialists would feel necessitated to guard themselves, like Butler’s Erewhonians, against the possible hostility of the machines.” He noted the institutional context of the computing projects in the UK, and encouraged those who saw human beings as “free” persons to ask themselves “how far Mr. Turing’s opinions are shared, or may come to be shared, by the rulers of our country.”

Two weeks after Turing’s words appeared in the press, the British Medical Journal published Jefferson’s Lister Oration and an accompanying editorial with this rejoinder:

Mr. A. W. [sic] Turing, who is one of the mathematicians in charge of the Manchester “mechanical brain,” said in an interview with The Times (June 11) that he did not exclude the possibility that a machine might produce a sonnet, though it might require another machine to appreciate it. Probably he did not mean this to be taken too seriously … (BMJ, 1949, p. 1129)

The same question arises again: how serious was Turing?

Max Newman, the sponsor of Turing’s academic position as a Reader in the Mathematics Department at the University of Manchester, regretted the June 1949 polemic and tried to alleviate the social pressure with a clarifying note that appeared in the same issue of the BMJ (Newman, 1949). He emphasized that digital computing machines could handle a variety of computing problems and observed:

The first question that will have to be asked is not ‘Can all kinds of thought, logical, poetical, reflective, be imitated by machines?’ but ‘Can anything that can be called “thought” be so imitated and, if so, how much?’ (Newman, 1949, p. 1133, his emphasis)

Following this moderate note, Newman remarked: “The most promising line here will be to work within mathematics itself.” According to Newman’s wife Lyn Irvine (1949), his letter was an attempt to “clear things up.” But Turing’s sonnet-writing machine remark suggests that he drew no line between Newman’s two questions, universal and existential. There must have been something of value to Turing that he felt was at stake. He was undeterred by the public reaction to his views.

4.2 Humor with Intellectual Integrity

An example of Turing’s willingness to take his progressivism a long way is his attitude toward the police when he was charged with sexual offenses. In January 1952, a case was brought against him under “Gross Indecency contrary to Section 11 of the Criminal Law Amendment Act 1885.” According to Hodges (1983), the detectives reported: “He was a real convert … he really believed he was doing the right thing” (p. 457).

Overall, an informative note on the nature of Turing’s humor was given by Turing’s fellow mathematician and contemporary at King’s College, Denis Williams:

In intellectual, as in other matters, it was essential to him that everything should ring true. […It] seems to me precisely this complete intellectual integrity, which, combined with his other gifts, made it reasonable to expect that he would produce results of fundamental importance in his own field. Alan had a delightful sense of humour. He enjoyed elaborating fantastic projects, such as a scheme for faking prehistoric cave paintings, in mock-serious detail, or bringing an over-serious discussion down to earth with a quick colloquial turn of phrase. With him jest and earnestness were often closely intermingled. (Turing, 2012 [1959], p. 91)

Now, if humor and intellectual integrity went hand in hand for Turing, how can we make sense of his ironic statements?

4.3 Irony as a Method

Against great “intellectual” and “emotional” opposition, as Turing put it, he made quite extensive use of irony. Sensing that he was not being properly heard, Turing found his own way to respond. John V. Price’s interpretation of David Hume’s irony can be helpful to our understanding of Turing’s:

When a man was under the intellectual and cultural pressure which Hume experienced he could not respond easily by denunciations, by shouting, or by threats. As a civilized man, Hume would not have responded that way under any circumstance. His method of dealing with those who would persecute him or ostracize him simply because of his religious or philosophical or moral opinions was subtle and effective. Irony gave him a method of operating in a world that found his ideas both strange and shocking: strange because most people were simply unable to handle them, shocking because his scepticism dared to attack the citadel of religion. New ways of thinking about man’s place in nature, especially if they do not reassure one’s blind faith, are often difficult … to tolerate. Irony could at least create artificial tolerance. (Price, pp. 4–5)

This proves to be very enlightening when applied to Turing. His “new ways of thinking about man’s place in nature” were “often difficult to tolerate” indeed. This parallel suggests that Turing’s words should neither be understood literally nor dismissed as plain mockery. Rather, his irony can be understood as a clever form of communication in an environment that was not receptive to him. It can be noted that Turing subtly used irony to make a point — e.g., in his comment to The Times about sonnet-writing machines — by imitating people’s language in ways that were aimed at exposing their vices or stupidity. This is different from mere parody, gentle or otherwise, or imitation for (philosophically empty) comic effect. I suggest that Turing’s irony can be best understood as satire, i.e., irony with a point. Under this interpretation, we can now examine the point that Turing was trying to make.

5 Turing’s Promethean Irreverence

Turing’s move in 1951 was not a thoughtless, isolated step. Rather, the same argument is consistently present in Turing’s communications and writings every year from 1947 to 1951 in connection with his conception of an intelligent machine. (We have seen his reply to The Times in June 1949, and will soon see his lecture to the London Mathematical Society in February 1947.) I will now show that in 1948 and 1950, through his systematic use of irony, he consistently articulated what he called — but denied being guilty of — his “Promethean irreverence,” consisting of what can be understood as an ontological and ethical argument against two objections to the possibility of intelligent machines.

5.1 The 1948 Objections (a) and (b) to Intelligent Machines

Turing wrote his last NPL report on “Intelligent Machinery” in 1948 while on leave from the NPL, to which he would never return. I want to emphasize that the first two of the five objections articulated there have the same structure as the climaxes of his 1951 BBC radio broadcasts — namely, they address what can be seen as an ontological and an ethical question, respectively:

(a) An unwillingness to admit the possibility that mankind can have any rivals in intellectual power. This occurs as much amongst intellectual people as amongst others: they have more to lose. Those who admit the possibility all agree that its realization would be very disagreeable. The same situation arises in connection with the possibility of our being superseded by some other animal species. This is almost as disagreeable and its theoretical possibility is indisputable.

(b) A religious belief that any attempt to construct such [intelligent] machines is a sort of Promethean irreverence. (Turing, 2004b, [1948], p. 410)

While objection (a) deals with the ontological question of whether intelligent machines could ever exist, objection (b) deals with the ethical question of whether one should ever build such machines. In objection (a), Turing refers to a kind of conflict of interest that he saw in the position of “intellectual people.” In objection (b), he records the charge that any attempt to construct such machines would be “a sort of Promethean irreverence.” Turing answered both objections:

The objections (a) and (b), being purely emotional, do not really need to be refuted. If one feels it necessary to refute them there is little to be said that could hope to prevail, though the actual production of the machines would probably have some effect. (Turing, 2004b, [1948], p. 410)

This suggests that Turing relied on the actual existence of intelligent machines as the best approach to addressing objections (a) and (b). Note also that objection (a), Turing’s first formulation of an objection to intelligent machines, marks the moment when he first expressed the core idea of “The Book of the Machines” in Butler’s Erewhon, though without citing it on that occasion. The Butlerian idea in objection (a) is, essentially, the view of machines as a species (in Butler’s terms, “the mechanical kingdom”). Through evolution, they could in principle rival and surpass humans in intellectual power. This was a corollary of Butler’s critique of Charles Darwin, as we will soon see (§6). This idea would recur in Turing’s work until 1951, as we have seen from his statements in the two BBC radio broadcasts (§1).

5.2 The 1950 Reformulation as Objections (1) and (2)

In his 1950 paper, Turing changed the order of the two “purely emotional” objections, (a) and (b), and named them “the theological objection” and “the ‘heads in the sand’ objection”Footnote 17:

(1) The Theological Objection. Thinking is a function of man’s immortal soul. God has given an immortal soul to every man and woman, but not to any other animal or to machines. Hence no animal or machine can think.

… In attempting to construct such [intelligent] machines we should not be irreverently usurping His [God’s] power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates.

(2) The ‘Heads in the Sand’ Objection. “The consequences of machines thinking would be too dreadful. Let us hope and believe that they cannot do so.” … We like to believe that Man is in some subtle way superior to the rest of creation. It is best if he can be shown to be necessarily superior, for then there is no danger of him losing his commanding position. The popularity of the theological argument is clearly connected with this feeling. It is likely to be quite strong in intellectual people, since they value the power of thinking more highly than others, and are more inclined to base their belief in the superiority of Man on this power. (Turing, 1950, pp. 443–444, his emphasis)

Once again, Turing linked the two objections by referring to the first in his discussion of the second. In objection (1), Turing refers to the image of usurping God’s power to create souls, and thus indirectly to the myth of Prometheus, once again in connection with religion, which he now satirizes. He suggests that the ethical claim against the project rests on unfounded ontological assumptions about distinctions between humans and other intelligent animals or machines. In objection (2), Turing again focuses on “intellectual people” and refers to “Man” and his unwillingness to lose “his commanding position.”

Turing finished his 1950 paper hoping that machines would eventually compete “with men in all purely intellectual fields” and called for action to make it happen, addressing both questions of ontology and ethics at once. Turing’s argument in response to these questions will become clearer through his conception of an intelligent machine (§6) and its related social implications (§8).

6 Satiric Novels and Turing’s Conception of an Intelligent Machine

Turing had seen engaging models of satire since his youth, as his mother Mrs. Sara Turing reported (2012 [1959], p. 108): “In his late teens he read a certain amount of fiction.” Specifically, she noted: “He had a particular fondness for The Pickwick Papers … and Samuel Butler’s Erewhon. This last possibly set him to think about the construction of an actual intelligent machine.” Charles Dickens’ The Pickwick Papers (1972 [1836]) and Samuel Butler’s Erewhon (1872) can be quite informative not only about Turing’s satirical style but also about his conception of an intelligent machine. This will further support the claim that Turing used irony with a point, that satire is deeply integrated into his science. “With him,” as Denis Williams observed, “jest and earnestness were often closely intermingled.”

6.1 Turing’s Dickensian Satire of the Universal Machine as an Image of the Mind

Charles Dickens (1812–1870) was an English writer and critic of Victorian society. Dickens made extensive use of the word “mechanical,” satirizing unreflective behavior, as in: “Mr. Winkle, being half asleep, obeyed the command mechanically, opened the door a little, and peeped out”; or “‘I’m a-comin’, sir,’ replied Mr. Weller, mechanically following his master” (Dickens, 1972 [1836]). Jon Agar (2003) notes Dickens’ specific innovation with respect to the mechanical metaphor (p. 64). He refers, for example, to Dickens’ Little Dorrit (1857) as a satire of the increasingly rule-bound British Civil Service: “Because the Circumlocution Office went on mechanically, every day, keeping this wonderful, all-sufficient wheel of statesmanship, How not to do it, in motion” (p. 76).Footnote 18

In his seminal “On Computable Numbers” (1936), Turing used the word “mechanical” only in his final move to refute Hilbert’s program for the complete mechanization of mathematics:

We are now in a position to show that [Hilbert’s] Entscheidungsproblem cannot be solved. Let us suppose the contrary. Then there is a general (mechanical) process for determining … (Turing, 1936, p. 262)

Following the war, the universal (Turing) machine would be detached from its original 1936 context and recast in the UK, with Turing’s approval, as a positive concept in connection with computer technology. This recasting appears notably in the documentation of the UK’s national computing project (NPL, 1946). In Turing’s obituary, Max Newman (1955) did consider that the emerging “automatic” computing machines “were in principle realizations of the ‘universal machine.’” However, he also noted that Turing initially conceived the concept “for the purpose of a logical argument” (p. 254). Turing’s 1936 concept of the universal machine served an instrumental purpose as part of a negative proof: it was conceived to refute Hilbert’s program for the complete mechanization of mathematics, and certainly not to serve as a general model for the human mind.

Turing (1950) did consider the possibility that the whole mind might be “mechanical” (p. 455), but this use of the word in the context of his discussion of human and machine intelligence was different from that of Turing (1936), which focused on the activity of human clerks. Since the introduction of Turing’s machines, the discussion of mind and mechanism has largely centered on arguments for and against seeing the human mind as a “Turing machine” (Putnam, 1960; Lucas, 1961).Footnote 19 As to whether an adult human mind should be understood in this way, Turing offered a satire in 1948, in Dickensian style, emphasizing the unthinking following of orders:

This would mean that the adult will obey orders given in appropriate language, even if they were very complicated; he would have no common sense, and would obey the most ridiculous orders unflinchingly. When all his orders had been fulfilled he would sink into a comatose state or perhaps obey some standing order, such as eating. Creatures not unlike this can really be found, but most people behave quite differently under many circumstances. (Turing, 2004b [1948], p. 424)

Turing’s intelligent machines, rather than mechanical devices in the Dickensian sense of unthinkingly following orders and rules, were imagined to be capable of discussing a literary interpretation of Mr. Pickwick (Turing, 1950, pp. 446–447). They were based on his concept of a “learning” machine (1950), developed from his earlier concept of the “unorganized” machine (2004b [1948]), as opposed to the “universal” machine, which strictly follows instructions.Footnote 20

6.2 Turing’s Evolutionary Machines and Samuel Butler

Turing’s learning machines would acquire intelligence in childhood through environmental interference in the form of an external educator:

There is an obvious connection between this process and evolution, by the identifications

Structure of the child machine = Hereditary material

Changes of the child machine = Mutations

Natural selection = Judgment of the experimenter

One may hope, however, that this process will be more expeditious than evolution. The survival of the fittest is a slow method for measuring advantages. The experimenter, by the exercise of intelligence, should be able to speed it up. (Turing, 1950, p. 456, my emphasis)

Here is Turing’s defense of the possibility of rapid cultural evolution of intelligent machines on top of the universal machine (fixed hardware). The bibliography of Turing’s 1950 paper lists The Book of the Machines, a major part of Erewhon (Butler, 1872). Erewhon is Butler’s most famous novel and is widely regarded as a satire on the values and manners of Victorian England.Footnote 21 Like Dickens, Samuel Butler (1835–1902) was a Victorian novelist and social critic. He respected no boundaries in his attacks on the social, religious, and scientific establishments, and he used irony to express his deeply held views. Butler’s literary career offered a fierce critique of Charles Darwin’s theory of evolution by natural selection.Footnote 22 The Book of the Machines had appeared in an earlier version in 1863 as “Darwin among the Machines.” As a satire of Darwin’s theory, it focuses on the systemic human displacement caused by the evolution of the machines, which emerges as the central theme of Turing’s heretical theory of intelligent machines. The central challenge Butler poses to Darwin is a dilemma: either Darwin had deliberately ignored the problem of the creation of the original germ of life from which all other living things evolved, or he intended it to be assumed that life was in fact indistinguishable from matter and had somehow evolved from it. Butler explores the second alternative, and Turing follows him in this. Here is Butler in Unconscious Memory:Footnote 23

I first asked myself whether life might not, after all, resolve itself into the complexity of arrangement of an inconceivably intricate mechanism. Kittens think our shoe-strings are alive when they see us lacing them, because they see the tag at the end jump about without understanding all the ins and outs of how it comes to do so … Suppose the toy more complex still, so that it might run a few yards, stop, and run on again without an additional winding up; and suppose it so constructed that it could imitate eating and drinking, and could make as though the mouse were cleaning its face with its paws. Should we not at first be taken in ourselves, and assume the presence of the remaining facts of life, though in reality they were not there? (Butler, 2004 [1880], ch. 2, my emphasis)

Compare Turing’s 1950 imitation game, proposed as a criterion for the presence of an intelligent machine. Butler continued:

If, then, men were not really alive after all, but were only machines of so complicated a make that it was less trouble to us to cut the difficulty and say that that kind of mechanism was ‘being alive’ …? (Butler, 2004 [1880], ch. 2)

Compare Turing (2004b [1948]), referring to chess-playing machines (p. 412): “Playing against such a machine gives a definite feeling that one is pitting one’s wits against something alive.” Along these lines, Butler asked:

… why should not machines ultimately become as complicated as we are, or at any rate complicated enough to be called living, and to be indeed as living as it was in the nature of anything at all to be? If it was only a case of their becoming more complicated, we were certainly doing our best to make them so. (Butler, 2004, [1880], ch. 2)

This is the essence of Butler’s view of the evolution of machines seen as a species. Butler added that he eventually realized that:

… this view comes to much the same as denying that there are such qualities as life and consciousness at all, and that this, again, works round to the assertion of their omnipresence in every molecule of matter, inasmuch as it destroys the separation between the organic and inorganic, and maintains that whatever the organic is the inorganic is also. (Butler, 2004 [1880], ch. 2)

“And it is to this second conclusion that Butler does indeed come,” explained the literary critic and Butler biographer Nicholas Furbank (1948 [1946], p. 59). “If it is to be either all mind or no mind,” Furbank added, “we might as well plump for all mind.” Furbank was nothing less than one of Turing’s two closest friends, whom Turing chose as his literary executor. Furbank’s biography of Butler, which develops the passages quoted above, was published by Cambridge University Press in 1948, the same year that Turing and Furbank met for the first time at the end of the summer. Butler, as we have seen, is an author who, according to Turing’s mother, had captured the imagination of the young Turing. Now, compare Butler’s view just quoted with Turing (1950): “I do not wish to give the impression that I think there is no mystery about consciousness” (p. 447). He added: “But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper” (can machines think?).

Given Butler’s influence on Turing, it should not be surprising that Turing’s intelligent machines, in addition to learning how to write sonnets and discussing their interpretation (1950, p. 446), could also acquire the capabilities to “be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make some one fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new” (p. 447). After arguing this, Turing wrote: “These are possibilities of the near future, rather than Utopian dreams” (p. 449). This indicates that Turing thought his project was more realistic than utopian; more generally, it shows that he presented the project in the optimistic frame of a utopia rather than in the pessimistic frame of a dystopia.

7 Turing as Dr. Frankenstein?

In her famous Frankenstein, or the Modern Prometheus (2012 [1818]), Mary Shelley (1797–1851) presented the character of Dr. Victor Frankenstein, a scientist who injects the spark of life into an otherwise inert body, only later to be horrified by his creature. There is a sense of despair in Shelley’s narrative as we see Frankenstein pushing his project forward unreflectively. Her concern appears, for instance, when the creature murmurs:

Like Adam, I was created apparently united by no link to any other being in existence; but his state was far different from mine in every other respect. He had come forth from the hands of God a perfect creature, happy and prosperous, guarded by the especial care of his Creator; he was allowed to converse with, and acquire knowledge from beings of a superior nature: but I was wretched, helpless, and alone. (Shelley, 2012 [1818], ch. 7)

Abandoned by Frankenstein, the creature lacked protection, assistance, and education in his (cultural) infancy. Later in her literary career, Shelley would refer to Evangelical discourses elevating not God but the mother as the theological and moral center of the household (Airey, 2019).

It is hard to blame Turing for being unreflective about the future of machines in society. For example, Turing wrote (1950): “I believe further that no useful purpose is served by concealing these beliefs [about the feasibility of intelligent machinery]” (p. 442). Drew McDermott (2007) acknowledged Turing’s contributions to this discussion, writing that “Turing was his own Mary Shelley.” McDermott thus saw Turing as a reflective, dystopian thinker, and this is at odds with Hayes and Ford’s view of him as a dystopian actor. And yet there seems to be less of the sense of despair typical of dystopian narratives in Turing’s remarks, and more of a sense of a utopian satire. Turing seems to have been far ahead of his contemporaries in his foresight of the unimagined possibilities of digital computers. Further, Turing did not believe that humans were superior beings — “the lords and possessors of nature,” in the phrase of Descartes (1985 [1637], Part VI) — quite the opposite. Turing’s refusal to see humans as masters of nature is at odds with Hodges’ suggestion that Turing embraced arrogant scientism along the lines of Mary Shelley’s Frankenstein.

Besides, in Mary Shelley’s novel, Frankenstein’s drive to control nature takes the specific form of collapsing the distinction between the natural and the artificial (synthetic). As noted above, Hayes and Ford pleaded with their colleagues to abandon the goal of creating an “artificial human” (1995); and in McEwan’s novel (2019), Turing’s work leads to the synthesis of embodied artificial humans with artificial skin and a host of other features that make them look like real humans. However, right after one of his outrageous remarks in 1951, Turing said:

It is customary, in a talk or article on this subject, to offer a grain of comfort, in the form of a statement that some particularly human characteristic could never be imitated by a machine. It might for instance be said that no machine could write good English, or that it could not be influenced by sex-appeal or smoke a pipe. I cannot offer any such comfort, for I believe that no such bounds can be set. But I certainly hope and believe that no great efforts will be put into making machines with the most distinctively human, but non-intellectual characteristics such as the shape of the human body; it appears to me to be quite futile to make such attempts and their results would have something like the unpleasant quality of artificial flowers. Attempts to produce a thinking machine seem to me to be in a different category. (Turing, 2004c, [1951], p. 486).

This suggests that Turing did not appreciate the prospect of making an artificial creature that resembled the human body, and even considered such attempts futile and bound to give unpleasant results. Turing’s reference to the “unpleasant quality of artificial flowers” underlines a distinction between the natural and the artificial, where “artificial” means synthesized as opposed to evolved, raised, or grown, as in the processes of nature.Footnote 24

Turing envisioned raising his child machines as analogous to raising a human child. In his report “Intelligent Machinery” (2004b [1948]), Turing observed that the education of a human being may take “twenty years or more,” emphasizing the importance of social contact at least until “university graduation” (p. 421). He pointed out that “the isolated man does not develop any intellectual power” (p. 431). In 1950, Turing quipped that a machine could be tutored and could even go to school were it not for the fact that “other children” would make “excessive fun of it” (p. 456). Susan Sterrett points out (2012, p. 709) that Turing’s insight that social and cultural interference in child rearing is crucial to the development of intelligence is not outdated in the light of research findings in cultural anthropology. Moreover, Turing did not consider the possibility of embodied, cyborg-like machines. He even focused his analogy between humans and machines on the case of “Miss Helen Keller,” who was able to educate herself despite her physical disabilities. Turing envisioned intelligent machines with limited agency in the world, functioning more as intellectual and educational tools. This is different from present-day AI, which can be regarded as agency without true intelligence (Floridi, 2023).

In this original sense, Turing imagined a society permeated by intelligent machines that would develop a subjectivity based on their own individual experiences, as opposed to a gigantic database of collective human experiences. Their behavior would go beyond a mechanical reproduction of what they were taught, including reflective responses similar to those of human children. Turing was remembered as saying: “proud owners [would] say ‘My machine’ (instead of ‘My little boy’) ‘said such a funny thing this morning’” (Newman, 1955, p. 255). Note that the owner and educator of an intelligent machine in Turing’s imagined future is a person, not a nation-state or a large corporation.

Overall, given Turing’s focus on the education of his “child” machines (Sterrett, 2012, 2017), Mary Shelley’s concern about scientists like Dr. Frankenstein abandoning their creature without any cultural upbringing could hardly apply to Turing.

8 Turing’s Intelligent Machine Utopia

The word “utopia” comes from the Greek “topos,” meaning “place” or “where,” with the “u” from the prefix “ou,” meaning “no” or “not”; it has come to refer to an ultimately good but non-existent place. For this reason, “to call something ‘utopian’ has, from very early on, been a way of dismissing it as unrealistic” (Sargent, 2010, pp. 14–15). A utopian frame of mind arises from the experience of bad times, which produces visions of a future in which the evils of society have been eliminated, replaced, or transcended, usually for the benefit of all humanity.

Frank and Fritzie Manuel’s large study of utopian thought in the West identifies another important element, technology:

Every utopia, rooted as it is in time and place, is bound to reproduce the stage scenery of its particular world as well as its preoccupations with contemporary social problems. Utopias … avail themselves of the existing equipment of a society, perhaps its most advanced models, prettified and rearranged. Often a utopian foresees the later evolution and consequences of technological development already present in an embryonic state. (Manuel & Manuel, 1979, p. 23)

The connection with Turing’s perspective on the future of early digital computers in society, as opposed to the most obvious scientific and military applications of computing in the early 1950s, seems striking.

What specific evils would have led Turing into a utopian frame of mind? We have seen that Turing avoided addressing issues explicitly, preferring to do so in passing, satirically (§4). However, through his remarkable use of irony, he provided several clues to his concerns about the time and place in which he lived. Once collected and connected, they may be revealing.

8.1 Turing’s Critique of Habits and Customs

Hodges seems to side with Jefferson in interpreting Turing’s philosophy of mind as reductionist, writing, for example (1983): “he introduced the idea of an operational definition of ‘thinking’ or ‘intelligence’ or ‘consciousness’ by means of a sexual guessing game” (p. 415). However, this view is at odds with the fact that Turing explicitly rejected the need to provide such a definition.

In fact, by introducing the imitation game, which requires a machine to imitate a woman in one version and a man in another, Turing moved the discussion of thinking and intelligence away from a standard analytical approach to philosophy. Hodges seems not to have appreciated Turing’s imitation game as irony with a point. He interpreted Turing’s man-imitates-woman variant of the game as “a red herring, and one of the few passages of the paper that was not expressed with perfect lucidity” (1983, p. 415).

It is worth pausing for a moment to appreciate the point of Turing’s imitation game. Gonçalves (2022b) reconstructed how Turing methodically varied the settings of the game, using case and control variants to experiment with the question: can player A successfully imitate stereotypes associated with player B’s type, despite their physical differences? Rather than promoting a universal concept of “human intelligence,” Turing’s 1950 argument can be understood, as Sterrett (2000) has suggested, as critically addressing stereotypes of intelligent vs. mechanical, female vs. male, human vs. machine, and natural vs. artificial. Juliet Floyd (2017) adds strength to this interpretation, noting that Turing “saw the difference in levels and types as a complex series of systematizations sensitive to everyday “phraseology” and common sense, not a divide of principle” (p. 142). This, Floyd adds, “was because he always saw ‘types’ or ‘levels’ as lying on an evolving continuum, shaped by practical aspects, the user end, and mathematics.”Footnote 25 On gender specifically, Turing understood as early as 1950 that gendered behavior is taught and learned in the child’s upbringing (see Sterrett, 2000, 2012, 2017). Turing was probably also responding to a thought experiment of Jefferson’s which suggested that gendered behavior is causally determined by the physiology of male and female sex hormones (see Gonçalves, 2022b).

Throughout his ironic exposition of the imitation game, Turing implicitly argued that intelligent machines could cause intellectuals to confront their own prejudices. He satirized the “theological” objection and the “arguments from various disabilities”:

God has given an immortal soul to every man and woman, but not to any other animal or to machines … I am unable to accept any part of this … The arbitrary character of the orthodox view becomes clearer if we consider how it might appear to a member of some other religious community. (Turing, 1950, p. 443)

The works and customs of mankind do not seem to be very suitable material to which to apply scientific induction. A very large part of space–time must be investigated, if reliable results are to be obtained. Otherwise we may (as most English children do) decide that everybody speaks English, and that it is silly to learn French. (Turing, 1950, p. 448)

What is important about this disability [being unable to enjoy strawberries and cream] is that it contributes to some of the other disabilities, e.g. to the difficulty of the same kind of friendliness occurring between man and machine as between white man and white man, or between black man and black man. (Turing, 1950, p. 448, his emphasis)

With his peculiar touch of irony, Turing addressed chauvinisms of religion, ethnicity, nationality, and race. These add to his strong critique of species chauvinism.

Beyond habits and customs, Turing’s irony addressed social and institutional structures, especially the division of labor and the role of intellectuals. By doing so, he may be seen as echoing the unheard voices from the relatively progressive environment of Bletchley Park, as Hodges (1983) nicely suggests, where “those excluded from participation in peace — ordinary men, the young, and even women,” all had played crucial parts (p. 311).

8.2 Turing’s Critique of Social and Institutional Structures

References to Frankenstein appeared in the English press in November 1946,Footnote 26 in connection with the advent of modern computing machines and the growing buzz about scientists building an “electronic brain.” This term, Hodges found through his extensive research (1983, p. 347), appeared in a speech by Louis Mountbatten (1946) to the British Institution of Radio Engineers on October 31, 1946, and the next day in an article about it in The Times. The British statesman spoke of a “revolution of the mind” and of chess-playing machines. He relied on information obtained from the National Physical Laboratory (NPL),Footnote 27 in particular Turing’s earliest postwar projections about the future of computing and the possibility of teaching machines to play chess, which Turing had presented in his report (2005 [1945], p. 389) to the Executive Committee.

It is known from Donald Bayley, who worked with Turing from 1943 to 1946, that Turing had spoken at the end of the war of his intention “to build a brain” (Hodges, 1983, p. 290; Sykes, 1992, p. 290; 25–27’). He joined the NPL in October 1945 to pursue this project, although he knew that the digital computing projects in the USA and the UK were an outgrowth of World War II. Turing sought to shift their use from mechanizing computation, primarily for warfare,Footnote 28 to supporting fundamental studies of nature and mathematics. Just after the media frenzy in early November 1946, Turing wrote to Ross Ashby (1946): “I am more interested in the possibility of producing models of the action of the brain than in practical applications to computing.”

But Turing’s interests were in sharp contrast to the NPL leadership’s narrative, as the letter from Darwin, the NPL director, to The Times shows. Darwin intended to clarify the NPL’s official position on the “electronic brain” polemic:

In popular language the word ‘brain’ is associated with the higher realms of the intellect, but in fact a very great part of the brain is an unconscious automatic machine producing precise and sometimes very complicated reactions to stimuli. This is the only part of the brain we may aspire to imitate. The new machines will in no way replace thought, but rather they will increase the need for it …Footnote 29

From a social perspective, it is worth noting one consequence of the NPL’s statement: it distinguishes tasks and jobs that should be automated from those that should not. Turing did not accept this division. In his view, intellectual jobs should be subject to the same displacement as labor-intensive jobs.

Before Darwin, Douglas Hartree (1897–1958) was the first intellectual to respond publicly to Mountbatten’s speech, and thus indirectly to Turing. In fact, Darwin’s letter to The Times was an official endorsement of a letter Hartree had written to The Times a few days earlier, attacking the use of the term “electronic brain” and claiming that computing machines “can only do precisely what they are instructed to do.”Footnote 30 Hartree made the same point in his inaugural lecture at Cambridge University (1947), where he described the ENIAC core unit — the so-called master programmer — as a device that allowed the “automatic control” of computation and “endows the machine with judgement in a restricted sense,” since the machine “can only do strictly and precisely what it is told to do” (pp. 20–21).Footnote 31

A few months after Hartree and Darwin’s public letters in November 1946, on February 20, 1947, Turing gave a talk to the London Mathematical Society about the machine-building project at the NPL. At the end of his talk, he made the master–slave dichotomy a central subject:

It has been said that computing machines can only carry out the processes that they are instructed to do. This is certainly true in the sense that … the intention in constructing these machines in the first instance is to treat them as slaves, giving them only jobs which have been thought out in detail, jobs such that the user of the machine fully understands what in principle is going on all the time. (Turing, 2004a [1947], pp. 392–393)

Turing went on to observe that “[u]p till the present machines have only been used in this way,” and asked: “[b]ut is it necessary that they should always be used in such a manner?” (p. 393).Footnote 32 Turing’s plea to liberate “machines” from slavery followed his questioning of the ethics of master programmers:

Roughly speaking those who work in connection with the [National Physical Laboratory Automatic Computing Engine] will be divided into its masters and its servants. Its masters will plan out instruction tables for it, thinking up deeper and deeper ways of using it. Its servants will feed it with cards as it calls for them … As time goes on the calculator itself will take over the functions both of masters and of servants. (Turing, 2004a [1947], p. 392)

The displacement of both masters and servants by the machine was thus implied early on in Turing’s postwar communications. He returned to this point and casually revealed that the masters-servants division, in this context, also corresponded to a gender division: “[o]ne might for instance provide curve followers to enable data to be taken direct from curves instead of having girls read off values and punch them on cards” (ibid.). Here, Turing’s earlier references to “Man” as a masculine generic were in fact materially marked by the male gender,Footnote 33 although his focus in this passage is on the division of labor and power imbalance between “intellectual” and non-intellectual, “mechanical” work that was evident at Bletchley Park. Turing thus emphasized his concern that the “masters” would perceive intelligent machines as a threat to their dominant position:

The masters are liable to get replaced because as soon as any technique becomes at all stereotyped it becomes possible to devise a system of instruction tables which will enable the electronic computer to do it for itself. It may happen however that the masters will refuse to do this. They may be unwilling to let their jobs be stolen from them in this way. In that case they would surround the whole of their work with mystery and make excuses, couched in well chosen gibberish, whenever any dangerous suggestions were made. I think that a reaction of this kind is a very real danger. This topic naturally leads to the question as to how far it is possible in principle for a computing machine to simulate human activities. (Turing, 2004a [1947], p. 392)

The real danger, according to Turing, was that machines would displace the lower classes of workers but not the higher ones.Footnote 34 Some of the “masters” he was addressing, intellectuals in positions of power, must have been in the audience of this lecture when Turing said to their faces that they would try to repress intelligent machines. Feeling threatened by “any dangerous suggestions” the machines might make, the intellectuals would “surround the whole of their work with mystery and make excuses, couched in well chosen gibberish.”

Hodges (1983) read Turing’s remarks as a response to Darwin’s letter to The Times quoted above, observing: “To describe such careful and responsible statements [Darwin’s] as ‘gibberish’ was not the most tactful policy” (p. 357). However, it can be noted that in 1947 Turing deliberately challenged Darwin, Hartree, and the NPL senior management, and possibly other mathematicians in the audience of his lecture. In 1948, Turing generalized his argument to “intellectual people” and their “unwillingness to admit the possibility that mankind can have any rivals in intellectual power.” In 1951, he addressed dominant “intellectuals” who would be “afraid of being put out of a job.”

It can be understood that Turing welcomed a humiliating lesson from machines not so much for the “human race” as a whole, but for a particular class or group: chauvinists of all kinds, especially intellectuals in dominant positions. The intelligent machines Turing envisioned would be able, contrary to what Hartree expected, to do more than “strictly and precisely” what they are told to do, and, contrary to what Darwin expected, to “imitate” not only the lower realms of the intellect but also the higher ones typically associated with “thought.” Therefore, they would affect not only jobs that are considered lower, but also jobs that are considered higher, potentially challenging existing social and institutional structures and helping to democratize power in society.

9 Concluding Remarks

I suggest that Jefferson’s image of Turing as a scientific Shelley is more profoundly correct than Hodges, given his concern to provide a multifaceted portrait of Turing, was able to convey.

Percy B. Shelley’s masterpiece Prometheus Unbound (1820) is a four-act lyric drama based on the trilogy of Prometheia by the Greek playwright Aeschylus. The classical trilogy concerns the torment of the Greek mythological figure Prometheus, who defies the gods and gives fire to mankind, for which he is condemned to suffering and eternal punishment by the god Jupiter, a representation of order and power. The stolen fire here has the metaphorical meaning of intelligence, whose specific form varies with the versions of the myth (e.g., political intelligence in Plato’s version of the story in Protagoras). In contrast to Aeschylus’ version, Shelley presented a Prometheus who, with the help of his beloved Oceanid Asia, manages to free himself from Jupiter’s oppression and start a bloodless revolution. In the words of Mary Shelley (1959 [1839]), Percy “followed certain classical authorities in figuring … Prometheus as the regenerator, who, unable to bring mankind back to primitive innocence, used knowledge as a weapon to defeat evil, by leading mankind beyond the state wherein they are sinless through ignorance, to that in which they are virtuous through wisdom” (pp. 684–685). In Turing’s utopia, intelligent machines rely on learning and reason to bloodlessly confront the ruling elite intellectuals, just as Asia’s supportive character mediates Prometheus’ confrontation with the gods.

Exploring the first image of Turing, this article has examined Turing’s irony in its social and historical context and followed the internal logic of his arguments to its conclusion. By doing so, it opens a new historiographical perspective on Turing’s work. It is possible to see that his intelligent machine utopia is directed against social and institutional structures, chauvinistic views of society and nature, and intellectuals who might sacrifice independent thought to maintain their power. Such intellectuals, Turing hoped, would eventually be surpassed by intelligent machines and transformed into ordinary people, as work once considered “intellectual” would be transformed into non-intellectual, “mechanical” work. Educated by individuals, rather than nation-states or large corporations, his ever-learning machines could help distribute power more evenly in society. Turing believed that the possibilities of the machines he envisioned were not utopian dreams.

Was Turing realistic? That may be seen as an open question. Recent scientific and technological advances, notably those demonstrated by a system called ChatGPT (Floridi, 2023), have made Turing’s argument that machines may indeed outstrip humans in intelligence seem more realistic. At the same time, the power of machine intelligence appears increasingly concentrated in the hands of a few. The idea of ever-learning machines, whose intelligence would grow out of their individual experiences and help democratize power in society, may therefore still seem a distant reality.