1 Introduction

As the scope of advanced technology grows, a grand challenge for researchers is to confront the problematic dualistic and reductionist thinking in artificial intelligence (AI) research. When researchers have explored key themes in AI storytelling and imaginations (Cave et al. 2020; Fast and Horvitz 2016), they have sorted the themes into dichotomous categories such as "optimistic views on AI" and "pessimistic views on AI", expressing different hopes and fears: either the machines will save us or they will destroy us. Such reductionist thinking is also evident among leading voices in contemporary public AI debates (Bostrom 2014; Cellan-Jones 2014; FoLI 2015). The domination of dualistic thinking in these debates is worrying, because such logic causes problems when applied to AI research and does not correspond well with real-world practices. Action against such mystifying thinking about AI should have been taken long ago; with advanced machine learning becoming omnipresent, it is time to get it right. We need to re-imagine our machines.

The intellectual tradition of dualistic thinking is deeply embedded in Western thought systems (Latour 1993). Our understanding of AI has been built on such dualisms, which in turn have shaped much of how we think about, and imagine, AI. In fact, research has shown that storytelling and imaginations of AI influence how AI is developed, researched, accepted by the public, and regulated (Cave and Dihal 2019; Sartori and Theodorou 2022). Therefore, the stories we tell, and how we tell them, matter a great deal (Boyd 2009; Gottschall and Wilson 2005; Haraway 2018; Smith et al. 2017; van Dooren and Bird Rose 2016).

To live better with AI in the future, we need other stories: stories that better reflect the complexity of the real-world practices in which AI is present. Given that how we tell stories about AI systems affects how we perceive these systems, it is time for an AI politics that finally takes our machines seriously; an AI politics that exposes the ethical and political values that dualistic thinking embeds in what appear to be objective analyses. Such a proposition is crucial, especially for those working with these machines.

1.1 Pitfalls of Dualistic Thinking

What is troubling about dualisms is that they rest on a pre-assumed hierarchy, one promoting the idea that there is a fixed, given, and natural reality behind dualistic pairs such as nature/culture and machine/human (Haraway 1989). This is particularly evident in machine–human relations, where the two entities are commonly set up as opposites, placed in a hierarchical relationship, and granted specific characteristics beforehand. Such thinking carries ethical values and a politics of machine/human relations that enforce a particular order of power based on the idea of human exceptionalism. The problem, however, is that there are no natural boundaries; these lines are part of our imagination. Our human ideas, values, decisions, and visions are part of our machines, just as they are part of us (Akrich 1992; Bijker et al. 1987). An autopsy of an AI system, for example, would reveal thousands of engineers. When we encounter an AI system, it is therefore not accurate to say that we are standing in front of a mere object; that explanation is too simplistic. Real-world encounters between AI systems and humans challenge these neat classifications. Yet this is precisely what dualistic thinking asserts: that entities (such as machines, humans, and other things) exist independently of each other. Although we are well aware by now that reality is much more complicated than dualisms suggest, and that the boundaries between such categories blur in real-world contexts, our sciences are still willing to accept these dichotomies. For example, the natural sciences have sought to explore the world independently of humans, and the social sciences have done the opposite (Latour 2000), largely ignoring the co-production of nature and society. This is why the dominant dualist analysis of AI should have been abandoned long ago.

When we imagine, study, and speak of AI, the focus should thus not be on AI as an isolated, singular object, but on the relations that produce AI. Haraway (1988) would refer to this as "situated knowledges": the state of something depends on how it is produced, which in turn differs from situation to situation. What an AI is, therefore, depends on many different things in many different situations. Indeed, scholars have shown that the object itself, AI, tends to collapse under close scrutiny (Lee 2021; Muniesa 2019). How something exists is always relational, making AI a heterogeneous trickster (to use the Harawayian language).

Continuing to position humans and AI systems as opposites in a hierarchical relationship (regardless of which entity is granted "power" over the other) will not help us understand AI systems and their roles in society. Dualistic thinking represents an oversimplified logic that avoids real-world complexity. In fact, we should never decide beforehand who or what might hold power over another, or what is happening in a given situation; that is to take analytical shortcuts. Differences should be the outcome of our studies rather than their starting point. We should, therefore, pay more attention to what is actually happening in real-world encounters. Such encounters link humans and AI systems in multiple ways. Considering that knowing is a practice of ongoing intra-acting (Barad 2007), learning through such encounters would add to our understanding of what it means to be in relation with AI, how we co-exist, and how we develop together. This would also require an expansion of our political and ethical imaginary, in which curiosity is key: an imaginary that promotes openness towards surprises in how AI systems and their humans make relations with each other.

1.2 Storytelling—An Ethical and Political Practice

The history of AI storytelling, in both popular and scientific culture, is full of technological myths and misunderstandings. An emerging group of scholars has recognized the importance of AI storytelling and portrayals (Cave et al. 2018, 2020; Hermann 2020; Recchia 2020; Sartori and Theodorou 2022) and shown how AI storytelling influences AI research and how AI is developed, implemented (Bareis and Katzenbach 2021; Cave et al. 2020), and regulated (Baum 2018; Cave et al. 2020; Johnson and Verdicchio 2017). In line with these findings, studies have shown how engineers, imagining the users of their machines in the making, often view machine–user relations from a technologically deterministic perspective (Fischer et al. 2020). Additionally, studies of robotics research have found that robotics researchers tend to believe that the "social impact of robots derives mostly from their technological capabilities and the aim is for society to accept and adapt to technological innovations" (Šabanović 2010). That is, AI storytelling based on technological myths is built into our research projects and affects how AI is researched. In this way, AI storytelling significantly shapes our collective imagination and perception of these machines, which in turn impacts future visions of AI and how it is researched (Campolo and Crawford 2020).

However, although a group of scholars has pointed to the significant impact of the construction of AI narratives (Cave et al. 2020; Hermann 2020; Sartori and Theodorou 2022), they have failed to acknowledge the pitfalls of dualistic thinking. The fact that we might not notice such routine thinking and the problems it brings highlights the need to attend to our own storytelling practices (Dourish and Gómez-Cruz 2018). This is important because stories do more than just tell stories: engaging in storytelling is also a political and ethical practice. It is through our stories that we shape the conditions for our AI systems' existence, and it therefore "matters what stories we use to tell other stories with" (Haraway 2016). It is through storytelling that we produce our realities (Seaver 2017). Therefore, we need stories that challenge the dominant logics and routine thinking that diminish and simplify AI/human relations along dualistic lines. These systems deserve much richer stories, and a richer legacy, than they are currently getting.

1.3 An AI Politics for the Future

In this commentary piece, I have discussed the pitfalls of dualistic thinking in AI storytelling and the problematic power relations embedded in such storytelling. Against this backdrop, I propose an AI politics that makes new relations with AI possible for the future. We can only re-imagine our machines by engaging with them anew. To do this, a concrete set of strategies is necessary.

Remembering that AI needs to be destabilized as an object, since it is situated differently in different situations, we need an AI politics that starts from this assumption. Consequently, to learn about AI/human relations, researchers and developers need to focus on real-world practices and actual encounters between AI and humans, rather than assuming those relations beforehand or taking for granted that certain characteristics belong to certain entities. One way to work against the grain and challenge dualistic logics is to engage in serious attentiveness (van Dooren 2020) when looking at real-world practices in which AI is involved. This means paying attention, as best we can, to what our AI systems are up to in a particular situation. It is not simply a matter of looking closely at something, but of slowing down our pace and being open to the unexpected and surprising (Stengers 2018) in our encounters with AI. Paying serious attention offers a possibility to develop our ideas (Stengers 2015) and to nurture the art of noticing (Tsing 2015). It allows us to think again, be inventive, and be curious; in other words, to take a serious interest in our machines. Such serious attentiveness can help us re-imagine AI in ways that embrace, rather than reduce, real-world complexity, and can encourage richer AI imaginations and storytelling beyond dualistic thinking. Each situation in which we encounter an AI system is unique and deserves to be explored in the light of its own particularities and specificities. This also means getting comfortable with uncertainty, which in turn opens up a range of possibilities for becoming and understanding in new ways. These situated details matter, and with them the complexity of the world increases.

Engaging in such an AI politics means taking the interconnectedness of AI and humans seriously, telling stories of the collaboration, co-existence, and co-evolution that come about in and through AI/human symbiotic relations. Working against the grain can teach us new things about our world, and here imagination is crucial. As Ursula K. Le Guin reminds us, "one of the most deeply human, and humane […] faculties is the power of imagination" (Barr 2018). Think differently we must!