When we boost the capacity of our AI systems to analyze as much data as possible in as little time as possible, we put them through a hardship once experienced by Victorian men. Go read the earliest letters of John Stuart Mill: thrown too young into the world of classics, handed the crushing duty to learn and ultimately to write, his training and decision-making process may not be so different from those of our machines, whose inferences can rediscover, at their best, natural selection or, at their worst, eugenics.

The ideal Victorian subject, of course, is a privileged man. We feel sorry for him because being that version of man is hard, but not sorry enough because he does not have to be that man. A better example of the nineteenth-century unprivileged would be a woman like George Sand who, though a perceptive observer of nature and human behavior, discovered neither natural selection nor eugenics. She did not have the time; she was too busy writing for a living. Unprivileged as a woman, she had no safety net to fall back on, and she refused to follow her male counterparts into risky financial speculations and literary experiments.

I believe we should design our AI systems like George Sand: the right kind of unprivileged subject. It is an argument about time. Darwin could mull over the origins of species for decades until a Wallace prompted him to publish on them in 1859. Sand, however, whipped up a novel on those same origins three years earlier, and unlike her Victorian contemporary she could not afford, literally, to wait that long for an instant success. She was also invested in physiognomy, the inference of character from facial features, but unlike Galton she could not afford (perhaps also literally) to invent new, unborn enemies: she had enough of them alive and well to recognize.

We make our AI systems quick for efficiency: to enable them to scan as much data as possible. We need to make them quicker out of urgency: to compel them to scan the most relevant data. We know that limits, not freedom, trigger creativity. Sand not only experienced that limit, she also recommended it. Her characters once debated the dangers of too much data in the work of the historian, who wields the same inductive reasoning as our AI systems:

“That is why we should probably not make too much history out of people’s memoirs, for they are almost always the work of prejudice or passions of the moment. It is the fashion now to dig these out with great care and to bring forward many trifling facts not generally known and which do not deserve to be known.”

“Yes, you are right. If the historian, instead of standing firm in his belief and worship of great things, lets himself be misled or distracted by small ones, truth loses all that reality invades.”

Trifling facts not generally known and which do not deserve to be known! A harsh epistemology from a harsh woman, a harshness born of urgency, not efficiency. The worship of great things may be closer to seeking an explanation like natural selection than one like eugenics: Darwin's inductions on the shores of the Galápagos may not have been a vacation. Freud, too, relied on inductive thinking, and I spend a lot of time in my gender studies classes debunking my students' myth that he was a privileged man concocting bizarre theories about women from a remote desk. There was an urgent need to understand bizarre illnesses, and there were women around him who helped him understand urgency.

What if machine learning could follow Darwin and Freud but avoid the abuses of their successors: eugenics and conversion therapy, respectively? What if it could rely on the worship of great things (the urgency of science) while avoiding being misled or distracted by small ones (the basis of pseudoscience)? Sand said it: truth loses all that reality invades. When an AI system spends too much time scouring and looting the surface, it cannot find the treasures below.

We need to remember that the history of induction is a gendered one. Women have always relied on inductive reasoning, out of need rather than want. AI has so far followed the male course in the history of thinking: it needs to account for everything we have learned from women's history and from the effect of social, racial, economic, and sexual privilege on decision-making.

When we have too much time on our hands, when time constraints on information processing are loose, we are paradoxically less capable of making effective decisions. That does not mean we need snap judgements or intuition. George Sand thought prolonged reflection was as dangerous as recklessness. She counselled against an insulting curiosity that leads us astray from the truth.
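To make that constraint concrete, here is a minimal sketch in Python of what a deadline-bounded decision loop might look like: given a hard time budget, the system must rank its evidence and commit rather than scan everything. Every name in it (`relevance`, `evaluate`, `budget_seconds`) is an illustrative assumption of mine, not the API of any real library or any particular system's method.

```python
import time

def decide_under_deadline(evidence, relevance, evaluate, budget_seconds):
    """Anytime decision procedure: rank evidence by a (hypothetical)
    relevance score, then evaluate only as much of it as a hard
    wall-clock deadline allows.

    evidence       -- iterable of data items
    relevance      -- item -> float, a stand-in for any prioritisation model
    evaluate       -- item -> (hypothesis, score), the inductive step
    budget_seconds -- the 'urgency': a hard time limit
    """
    deadline = time.monotonic() + budget_seconds
    # Most relevant items first: the limit compels selectivity.
    ranked = sorted(evidence, key=relevance, reverse=True)
    best = None
    for item in ranked:
        if time.monotonic() >= deadline:
            break  # Out of time: commit to the best hypothesis so far.
        hypothesis, score = evaluate(item)
        if best is None or score > best[1]:
            best = (hypothesis, score)
    return best  # None only if the budget expired before any evaluation.
```

Tighten `budget_seconds` and the procedure behaves like Sand, forced to decide on the most relevant evidence; loosen it and it drifts toward decades of mulling, free to be distracted by trifling facts.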

Further exchanges between computer science and the humanities can shed light on how to optimize artificial intelligence without repeating all the mistakes of human intelligence. At stake is a greater theory of mental equivalences: how can we replicate the cognitive state of a Sand, a Darwin, or a Freud and map its conditions of urgency onto our AI systems?

I will end with one concrete example from Sand's own decision-making process. Following the blunders of the 1848 French revolution, which she had spearheaded, and the coup d'état of Louis-Napoléon Bonaparte, Sand faced a dilemma: leave France like all her friends, or stay? She stayed, angering some of them. The difference was that she was not privileged like them: they were men, she was a woman. They could start a new life in exile; hers would have been precarious. In retrospect, she seems to have made the right decision. The trick was not life but death: the idea that she would not survive, in spite of her republican convictions or prior data. Perhaps we need to shift the emphasis in the difference between natural and artificial life. Perhaps we can teach AI about urgency by teaching it to replicate not the living, but that which can die.
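If one wanted to play with that closing idea computationally, here is a toy sketch, entirely my own construction rather than any published algorithm: an agent whose value for deliberating one more step is discounted by its probability of surviving to act at all. The parameters `sample_option`, `utility`, `survival_per_step`, and `expected_improvement` are all hypothetical.

```python
def mortal_deliberation(sample_option, utility, survival_per_step,
                        expected_improvement, max_steps=1000):
    """Toy model of a mortal decision-maker.

    Each deliberation step draws one more candidate option, but the
    agent only survives each extra step with probability
    `survival_per_step`. It stops as soon as acting on its current best
    choice is worth more than the survival-discounted value of
    deliberating one step further.
    """
    best_value, best_option = float("-inf"), None
    steps = 0
    for _ in range(max_steps):
        option = sample_option()
        steps += 1
        value = utility(option)
        if value > best_value:
            best_value, best_option = value, option
        # Acting now is worth best_value. One more step is worth, in
        # expectation, survival_per_step * (best_value + expected_improvement).
        if best_value >= survival_per_step * (best_value + expected_improvement):
            break  # Mortality makes 'good enough now' beat 'better later'.
    return best_option, steps
```

Set `survival_per_step` to 1.0 and the stopping rule never fires while improvement is still expected: the immortal agent deliberates to the cap, like a Darwin with decades to spare. Set it below 1.0 and the agent commits as soon as its best option is worth the risk of acting on, like a Sand in 1851.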