1 Introduction

Although vital for policy and practice, an understanding of public perceptions of artificial intelligence (AI) in Defence settings has been neglected in the research literature. This understanding is essential for crafting informed policy, governance, and assurance approaches, and for identifying opportunities to inform the public about the capabilities, limitations, benefits, and risks of AI across Defence. Moreover, accurate information needs to be available so that the public can develop well-informed opinions on this matter; this is important because these views can influence policy-making, investment, and other decisions made regarding the use of AI in Defence. In instances where research and development are stalled through over-legislation because of public concerns driven by misconceptions, there could be a clear impact on UK national Defence. Excessive fears about AI could impede the development and use of beneficial systems through restrictive regulatory processes (Cave et al. 2018, 2019; Cave and ÓhÉigeartaigh 2019). In contrast, over-trusting could leave individuals open to potential security issues, abuse, or a general loss of autonomy (Cave et al. 2019; Schepman and Rodway 2020). Therefore, gaining an understanding of the public’s attitudes and perceptions towards AI in any area of today’s society is of critical importance. As the public forms the basis of the electorate in many global democracies, they have the capacity to shape government policy-making, which can in turn influence research spending and regulatory policies (Zhai et al. 2020).

The notion that AI innovation and use could be shaped by public opinion and understanding has recently been documented in the USA, where several states enacted bans on the use of AI-driven facial recognition systems (Conger et al. 2019; Schneier 2020). The backlash was driven in part by concerns related to privacy and consent attached to the collected images. However, other countries might continue forward with developments with little or no regard for the key concerns highlighted previously (Morgan et al. 2020). Therefore, engaging with the public in such a sphere of development is critical to ensure a commitment to the ethical use of such technologies, as well as ensuring that key developments in the field are communicated as transparently as possible (Morgan et al. 2020). However, the topic of AI in Defence has largely escaped an empirical research focus in the field. Much of the research linked to public perceptions of AI relates directly to general applications of AI (Schepman and Rodway 2020, 2022) and lacks specificity for applications linked to Defence. Yet, applications of AI in Defence settings are increasingly being developed and used, including sensors for situational awareness, guidance on risks, simulation and training, and decision support at operational and strategic levels (Du et al. 2020; RAND 2021; Wasilow and Thorpe 2019). The current research aims to bridge this perceived gap by presenting an initial exploration of the perceptions, beliefs, and attitudes of the public surrounding the application of AI in Defence settings.

1.1 Defining AI and AI in Defence

Whilst there have been a variety of attempts at defining AI, there is currently no general agreement on an accepted definition (Gillath et al. 2021; Wang 2019). Wang (2019) presented a detailed discussion of the issues surrounding the creation of a working definition for AI, but also proposed that the absence of a clear definition is not of critical importance for the field. This is largely due to the underlying complexities of fields associated with AI, such as those related to intelligence. However, not having a clear, working definition can also make communication of concepts problematic, and whilst such issues may not be of direct relevance for the field as a whole, they can present clear problems where public-facing communication is concerned.

Gillath et al. (2021) presented a working definition that serves as a good, overall reference point for many aspects of AI, including those related to Defence applications:

“Artificial Intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions), and self-correction. Particular applications of AI include expert systems, speech recognition, and machine vision. Examples of AIs include personal helpers (like Siri and Alexa), medical diagnostic aids, and self-driving vehicles” (p. 1)

Defence-specific definitions of AI have been offered less frequently within the research literature. The Defense Innovation Board (2019) suggested that AI in Defence could be defined as:

“An artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from its experience and improve when exposed to data sets” (p. 8).

The potential applications for AI in Defence are virtually unlimited, and could include aspects of logistic support, simulation, target recognition, and threat monitoring (Taddeo et al. 2021). Such uses have been further categorised according to three typologies: Sustainment and Support; Adversarial and Non-Kinetic; and Adversarial and Kinetic (Taddeo et al. 2021). Sustainment and Support refers to the use of AI in ‘back office’ functions, including operational and logistic functionality (Taddeo et al. 2021). Adversarial and Non-Kinetic applications of AI in Defence include those related to cyber-Defence, as well as cyber-offensive capabilities. Finally, Adversarial and Kinetic applications are those directly involved in combat operations, including the use of AI systems to aid the identification of targets, and the use of Lethal Autonomous Weapon Systems (Taddeo et al. 2021).

1.2 Why are public perceptions of AI and AI in Defence important?

Previous research has noted that attitudes and perceptions towards AI have a significant role to play in the acceptance of such emergent technology (Lillemäe et al. 2023; Othman 2021; de Fine Licht and de Fine Licht 2020; Schepman and Rodway 2020; Selwyn and Gallo Cordoba 2021). Luccioni and Bengio (2020) suggested that AI has passed a threshold where the field is no longer the sole domain of experts focused on technical concerns. Other researchers have noted the importance of transparency in AI decision-making, particularly in critical areas, if negative outcomes are to be avoided (de Fine Licht and de Fine Licht 2020). There has been increasing attention on how non-experts, such as policy makers, regulators, and media outlets, are engaging with concepts aligned with AI (Selwyn and Gallo Cordoba 2021). In turn, it has been acknowledged that greater attention should be paid to the general public’s perceptions, sentiment, and opinions towards AI for several key reasons. First, such perceptions can have a direct influence on the future implementation of AI technology. Second, gaining a better understanding of current perceptions held by the public provides an opportunity to assess systematic gaps and misunderstandings that may be an integral part of such perceptions. Such shortcomings can be addressed through better communication and knowledge building (Selwyn and Gallo Cordoba 2021). Zhai et al. (2020) also noted that “public perceptions and concerns about AI are important because the success of any emergent technology depends in large part on public acceptance” (p. 138). These authors noted that, for many, knowledge and understanding related to AI are often vague and usually one-sided, mostly due to the influence of mass media. Lillemäe et al. (2023) noted that overall positive attitudes towards AI were related to a positive perception of military-based AI applications. Individuals were also less likely to see military applications of AI as risky if they held an overly positive perception of AI. Conversely, individuals who perceived AI as a threat reported higher risk perceptions of AI in military applications, as well as greater doubt regarding the overall decision-making capabilities of such systems (Lillemäe et al. 2023). It is evident that public perception and acceptance of AI in Defence settings can be driven by other external factors, in particular the popular narratives associated with AI.

1.3 Narratives and the perceptions of AI in Defence

Narratives are seen as playing a critical role in the communication and shaping of ideas throughout the history of technological development (Beer 2009). Narratives are collections of text, images, events, and cultural artefacts that describe a particular story. Recent historical examples of the way in which narratives have influenced public debate around the evolution of divisive topics include genetic modification, nuclear energy, and climate change (Cave et al. 2018).

The role of narratives and the influence of the mass media on public perceptions, attitudes, and beliefs associated with AI in Defence should not be underestimated. For decades, Western militaries have been exploring the use of technology to engage in war from a distance, “in a way that is consistent with (liberal) values of restraint” (see Carvin 2022, in Depledge 2023). Over time, as societies change, so do views on and support for the military in terms of the force militaries wield, to what end, and at what cost (Depledge 2023). How AI fits in with these opinions and is seen to influence the real and perceived costs associated with Defence will in turn affect the acceptance or rejection of technologies that harness AI. For example, if one use of AI is to provide a capability that minimises loss of life to our soldiers, then it may be seen as more acceptable than its use in other circumstances.

The field of AI is littered with complex ideas, technologies, and theories, alongside applications that span numerous spheres of interest (Zhai et al. 2020). The average individual will access information on a topic that is presented in a manageable, easily accessible way. Hence, the prototypical view of what AI in Defence is may be clearly shaped by mass media narratives (Shih et al. 2008; Zhai et al. 2020).

Perhaps the most prominent narrative, particularly when it comes to public perceptions and attitudes towards AI use in Defence, is that of dystopian constructs and futures, often shaped by popular media and science fiction (Cave et al. 2018, 2019). Common to these narratives is the notion that AI applications created for Defence purposes are subverted, or gain ‘awareness’, and in turn are used against humanity, ultimately leading to enslavement or annihilation (Cave et al. 2019).

In contrast to dystopian perspectives, others have noted that the public also hold unrealistic expectations of how AI could help society, creating misplaced trust (Cave et al. 2019). Even though such feelings around AI may be based on unfounded constructs, these attitudes could directly influence public perceptions of AI.

As the use of AI expands into more contentious, emotionally and ethically charged areas, such as those related to proposed Defence uses, it will be even more important to assess the current perceptions of the public. If a current, in-depth understanding of what the public knows (or thinks they know) about AI use in Defence contexts is achieved, gaps in understanding can be highlighted, with the potential to provide clearer, accurate, and relevant information that targets potential misunderstanding.

1.4 Study aims

The aim of this research is to present an initial exploration of the current public perceptions, beliefs, and attitudes towards the use of AI in Defence. Given the scarcity of existing literature on the current topic under discussion, an exploratory, inductive, qualitative approach was adopted. Whilst the research literature reviewed above presents some potential indications as to how the public create their perceptions of AI use in Defence, and what these could be, the work relies heavily on applications linked to more general uses. This existing work also fails to fully explore the more emotive aspects of AI use in Defence, particularly where such uses could include the use of lethal force. The overall goal of the current study is to provide a clearer, current picture of how the public views AI in Defence, key concerns, and potential barriers to the implementation of such applications.

2 Method

2.1 Participants and procedure

Potential participants were identified via three databases of research participants managed by the Department of Psychology (Nottingham Trent University). Selection criteria included the age of participants and access to the Internet and Microsoft Teams. Table 1 provides an overview of group sizes and composition.

Table 1 Focus group composition

Participants were invited to take part via e-mail and social media. A poster inviting participation in focus group interviews was sent to all potential participants contained in the databases described above. In addition, the recruitment poster was shared on suitable forums on social media (Twitter, Facebook, and Instagram). Recruitment continued until a minimum number of participants was reached for each group. It was decided that participants would be grouped by age, to form homogeneous groups, to enable them to feel comfortable, and to facilitate discussion. Two researchers were present throughout the focus group sessions to help facilitate discussion, as well as to provide a back-up should issues with connection to the group arise. The focus group discussions lasted between 50 and 64 min and were conducted online by two of the researchers (MKM and JS) via MS Teams between 16 and 25 May 2022. Participants received a small honorarium for taking part in the study in the form of a £10 gift voucher. Audio recordings were generated within MS Teams and then fully transcribed.

Prior to the session, all participants were presented with detailed information on the purpose of the study, which outlined the procedure for the focus group sessions. Participants were reminded that the focus group sessions should be viewed as a safe environment for everyone to express their views in confidence and without fear. Participants were also reminded that the focus group sessions would be audio-recorded, and that other participants in that session would be able to view their names via MS Teams. They were asked to respect and protect other participants’ anonymity. They were allowed 72 h to read the briefing material, provide their written consent, and return it to the lead researcher before commencement of the focus group session.

The project was granted favourable ethical approval by the Ministry of Defence Research Ethics Committee (MODREC), reference number 2109/MODREC/21.

2.2 Materials

Based on a review of the research literature that currently exists on general AI applications, a series of questions was developed by the research team to help participants discuss and explore their understanding and views of AI and its application in Defence settings (see supplementary material). Initial questions encouraged participants to share their knowledge and use of more general aspects of AI, with the latter portion of the focus group questions homing in on specific aspects related to AI in Defence. These questions were loosely grouped around initial perceptions of what AI in Defence could be, aspects of trust related to its use, and when the use of AI in Defence is perceived as acceptable (Table 2).

Table 2 Emergent themes and sub-themes from focus group sessions

2.3 Data analysis

Focus group recordings were transcribed verbatim. Thematic analysis was applied as an approach whose aim is to identify repeated patterns of meaning across the data (Boyatzis 1998; Braun and Clarke 2006). This was appropriate because “a theme captures something important about the data in relation to the research question, and represents some level of patterned response or meaning within the dataset” (Braun and Clarke 2006, p. 86).

In the current study, data were analysed using an inductive thematic analysis method whereby the researchers read, re-read, and explored the data in search of themes, sub-themes, patterns and relationships between these, and insights into their meaning. The following broad steps were followed, as outlined by Braun and Clarke (2006): familiarisation with the data, generation of initial codes, searching for and creating themes, reviewing themes, and refining and naming themes. The re-examination of the text facilitates the identification of commonalities and differences within it, resulting in the formation of themes and sub-themes. Identifying overlaps and differences between these indicates patterns and relationships between them and their meaning.

Transcripts were read several times to gain a general understanding of their content. Initial coding and early thoughts and insights were noted. Next, data were coded to clarify emerging themes and sub-themes, and notes were made of patterns and re-emerging relationships. The write-up of themes for each sub-group then commenced, and the data were explored further to establish and clarify sub-themes (and the variables that made up each of these), patterns, and relationships. This act of writing up added another layer of analysis, providing a more nuanced understanding of themes and sub-themes, the overlaps and differences between these, and insights into their relationships.

The analytical process was led by one researcher (JS) and supplemented iteratively by a second (MKM). The final configuration of themes and the validity of the analysis were then examined by two further researchers (LH and JB), and some themes were expanded following discussions among team members (Braun and Clarke 2006). Analytical software was not used. As such, the lead researcher was immersed in the data, which allows rich insights to be identified. However, qualitative analysis of this nature can be biased due to subjectivity. Accordingly, the four researchers, who are experienced in the application of this analytical approach, made all efforts to ensure that the results are evidence-based and exemplify the data.

3 Results

The analysis of the focus group data revealed a variety of themes that explored basic experiences and perceptions of AI in general, as well as more specific concerns, attitudes, and perceptions towards AI in Defence. Themes and sub-themes are summarised in Table 2. The themes are not presented in any particular order; all are equally prominent.

3.1 Theme one: the human within the system

The first theme related to the interplay between AI and human users/operators, and the respective roles of the human and of AI. This theme captured something of a paradox in participants’ positions on the role of humans within the whole AI infrastructure for Defence. On the one hand, participants acknowledged that AI is created by humans and functions within certain parameters that have been defined by its creators. On the other hand, participants were keen to stress that they recognised an inherent issue with this process: the potential for biases to creep into such systems because of human involvement. Even so, participants stressed the need for humans to be involved in outcomes and decisions, and were generally against allowing AI to make a final decision in any shape or form in Defence settings. Participants were also keen to stress the need for some form of accountability within a system, whether this be with the programmers themselves or some other authority figure who had instructed the use of a system. Four sub-themes emerged.

3.1.1 Humans create AI to do what we tell it to do

Many participants discussed the nature of AI in terms of how and why it was created, and what it can do. For many participants, there was a clear assertion that AI is something that operates within set parameters, and that these parameters are or should be set by humans:

“It is a system that humans have created that can do certain functions that we tell it to” (FG2)

“A machine but working within certain parameters as being set by human beings” (FG4)

“AI is created by humans but is not living itself but is designed to react within parameters defined by humans” (FG4)

“There isn’t any kind of independent decision-making I suppose they’d still need to be responding to what’s in the code or the programmer has decided needs to be the next step or branching out” (FG2)

Central to participants’ conceptualisation of what AI is, is the notion that it is something created by humans to function within a certain set of parameters. For many participants, there was no scope for functionality or development outside of these parameters, a stance that is generally counter to the purpose of AI, given its potential to learn and develop beyond its initial state.

3.1.2 Humans introduce bias into the system

Participants discussed the potential for AI to exhibit biases in decision-making processes because of the background programming that has taken place to create such systems. Many participants were genuinely concerned that the biases that exist in humans could creep into the very systems that are supposed to be devoid of them.

“Whatever is created, it takes on the biases of whoever is creating it” (FG2)

“I just think it’s very risky because you can't take that potential bias or sway out of the kind of initial code that’s been programmed by humans.” (FG2)

“Technology is only as good as the people creating it and setting the intent for what the technology needs to carry out” (FG1)

The latter quote exemplifies how participants viewed the potential for the developers of AI in Defence to have a direct influence on how the system acts in the process of its functionality.

3.1.3 Need for human input and monitoring

A recurrent theme that emerged across all the focus groups was the need to retain an element of human decision-making within AI for Defence. Some participants viewed this as a confirmatory process, ensuring that AI was not reaching a final decision without direct input from a human operative.

“Perhaps AI accompanied with somebody who has an eye for these things is the best way to go and I apply that belief to Defence.” (FG1)

“All of these things are affecting human beings eventually, aren’t they? So, you’d like to think there’s a human involvement in decisions somewhere along the line.” (FG4)

“I think there needs to be an evaluation process involving humans, what decisions have been made.” (FG4)

“I worry that by kind of taken human beings out of the equation and letting these AI systems make decisions almost and influence policy or big decisions in terms of the military and Defence” (FG2)

Other participants viewed the potential for AI to make critical decisions without human intervention or input, potentially leading to catastrophic outcomes, as a source of risk and fear.

“It is really scary for somebody to drop a nuclear bomb and have no human intervention to stop it once it’s set. That is more scary. Our everyday stuff doesn’t kill somebody, AI for Defence could and they could get it wrong.” (FG4)

“Just the fact that it hopefully won't go rogue. That’s the only thing really.” (FG4)

This was one of the key concerns for participants and was seen as a critical barrier to establishing a level of trust in AI when being used in a Defence context. Throughout the focus groups, participants often reiterated the need for the human element, particularly when it came to critical decision-making processes, to be retained as part of any AI in Defence.

3.1.4 The need for accountability

One of the topics that the focus group discussions returned to repeatedly, and with emphasis, was accountability. Many participants wanted reassurance that someone would or could be held accountable for the actions AI takes in Defence settings.

“Where is the accountability? So, if the AI has decided we’re going to bomb this compound because of X, Y and Z factors, if it turned out that’s actually a school is it the programmer who is responsible for that? Is it the person who was overseeing the drone, the people who were in the room?” (FG2)

Participants were keen to establish where accountability would lie in the context of an error in decision-making and target selection, particularly where there was potential for collateral damage to a non-military target. In the following quote, a participant questioned who would be accountable in an instance where AI is charged with decisions about the use of lethal force. There was an aligned concern that these decisions may not be fair and reasonable.

“If we’re handing over literally life or death decisions to a machine essentially, that again, right back at the start has been programmed by a human, where is the accountability and how can we be sure that they’re making just and fair and reasonable decisions?” (FG2)

And finally, participants focused on how background programming could influence who takes the responsibility for the outcome of decisions being made by AI.

“How is responsibility and learning programmed into it, who takes responsibility and all that kind of thing that comes” (FG5)

Participants across all focus groups had reservations about to whom accountability would be attributed when decisions are made by AI in a Defence setting. In most cases, participants were seeking clarification as to where accountability would lie in these instances.

3.2 Theme two: the ethical use of AI in Defence

This theme explores views of how and when AI in Defence should be used, if at all. The participants expressed strong views on how and when AI in Defence should be used, with non-lethal applications being favoured as more acceptable than lethal applications. The first sub-theme explores participants’ perceptions of what applications could be acceptable for AI in Defence.

3.2.1 Acceptable uses of AI in Defence

The participants talked about general settings and situations in which they viewed the use of AI in Defence as being acceptable. There was a clear demarcation here, and this mirrors the distinction between big data applications and those which require aspects of human decision-making highlighted in Schepman and Rodway (2022). Most participants viewed appropriate uses of AI in Defence to include mundane tasks, tasks that improve efficiency, tasks that might preserve lives, and tasks that can free up human operators to do other things.

“I think kind of gathering data, processing it, analysing it, presenting it can be great for machine learning and AI to do” (FG2)

“The mundane tasks the more time that can be spent improving other areas of Defence which overall improves Defence, but I would say my big no-no is when there’s no human element” (FG1)

“I don’t really see it as more of something that’s being deployed, I see it as more of an efficiency thing that increases running and productivity.” (FG1)

One thing that these applications have in common is the lack of any involvement in critical decision-making processes, focusing more directly on processes and operations that are linked to more routine, back-office tasks.

Participants also viewed the capacity to keep people safe and provide protection against external threats as a key benefit of using AI in Defence settings. On one level, they spoke about protecting frontline staff and soldiers against harm, particularly where autonomous drones could be used in operations such as bomb disposal activities or scouting operations in warzone settings.

“Obviously, the main idea with something like that is to keep people safe when it comes to bomb disposal but like you were saying about drones and going into areas, they could definitely be AI.” (FG2)

At another level, there was the potential for AI to be used in a very clearly defined defensive posture, where it offered clear protection from external threats, and focused on protection of everyone, not just frontline military staff.

“In terms of Defence for us as civilians, I presume we’re talking about things like early warning systems for if there’s nuclear missiles coming towards the UK there’ll be a system in place that will identify that, alert everyone and who knows how it actually happens but again, if it’s correct then great, defend, do what they need to do.” (FG2)

“For me, what comes to my mind when we talk about AI in Defence is safety, protection. Why do we view this machine. It’s for safety, for everyone to be safe. You can be kept secure from the enemies.” (FG1)

3.2.2 Unacceptable uses of AI in Defence

Participants were very clear about where AI should not be used in the context of Defence. Most objections related to situations where critical decisions needed to be made without any human involvement. Many participants were uneasy with the use of AI as a sole mechanism for making decisions and did not want the ‘human element’ to be completely removed from such situations.

“When it comes to execution and decision-making that’s when it needs to be handed off to a human or a group of people rather than executive decisions being made” (FG2)

“I would say my big no-no is when there’s no human element.” (FG1)

Another objection was the moral concern that AI used in a Defence setting would not take into consideration the potential for collateral damage and would see humans as expendable. This is captured in the following extract from one of the participants:

“If AI was used on civilians that don’t have anything to do with the tensions could be saved, I think that’s like AI done well but if that AI somehow neglects to think of the people… If AI uses people as collateral that’s when I draw a red line.” (FG1)

3.3 Theme three: trust in the system vs. trust in the organisation

Throughout the focus groups, trust was a recurrent theme that participants focused on. On the one hand, participants discussed trust being placed directly in the machine. On the other hand, participants discussed trust in relation to the organisations managing AI in Defence. Primarily, participants found it hard to establish trust with a machine and viewed an element of human interaction as essential for developing trust.

“I think with everything it’s just hard to trust a computer, you need that personal touch and personal ability to think.” (FG1)

“The first thing that comes to mind is can we trust these things” (FG5)

“I don’t think I trust it 100%, I think it’s easier to build machines and get machines to do things, but, actually, it probably does need more of a human side of things as well. I don’t trust it a lot but I like the idea of making things easier, but I don’t necessarily trust it.” (FG1)

Participants were also keen to discuss trust in the military as part of the process for establishing trust in AI in Defence. Again, the notion of bias came into the discussion, with one participant highlighting that AI is only as good as the person who has created it.

“I do trust the military to some degree. It depends if you define the military as a state or a group that’s out there…” (FG5)

“In terms of how much I trust or distrust the military, I think with any technology you’re building it’s only as good as the people creating it as it is based off human consciousness, and it is modelled off of humans.” (FG1)

3.3.1 Possibility for errors

One sub-theme linked to trust in AI was the potential for errors to occur in the functioning of the system. These errors were in turn viewed as having the potential to lead to catastrophic loss of life or wider conflict. For example, one participant stated:

“If you leave it to the machine to make up its own mind then I foresee trouble ahead.” (FG5)

There is a real sense of foreboding here from the participant, and this rhetoric was shared by a variety of other participants. For example, the following participant questioned the development process behind such systems, and suggested that the programming could be flawed in some way so that the system does not fully understand how to respond. Again, the participant believed that this could lead to things going ‘horribly wrong’.

“Sometimes they design these things, and they don’t necessarily think everything through, you know, they don’t think every single option through so there could be quite a large room for error and it could go horribly wrong.” (FG2)

In the extract below, there is a clear sense of fear that AI could start to make decisions by itself, resulting in the potential for it to ‘go rogue’, again leading to catastrophic consequences. Several participants mentioned the potential for such errors to lead to unplanned conflict and even nuclear war.

“Just the fact that it hopefully won't go rogue. That’s the only thing really.” (FG4)

“If it’s incorrect we’ve now inadvertently started a nuclear war based on a machine and based on a programme that was written” (FG2)

3.4 Theme four: information gathering for AI in Defence

Participants explored ways in which they gathered information related to developments surrounding AI and AI in Defence. Throughout the focus groups, there was a great deal of discussion around the role the mass media has in the portrayal of AI in Defence. The first sub-theme deals with a deep-rooted mistrust of the mass media and its capacity to communicate the facts and truth surrounding AI in Defence.

3.4.1 The mass media is not trusted to present the truth

Participants across all focus groups viewed the capacity of the mass media to present clear information about AI in Defence as limited. Participants highlighted that the mass media focused more on exaggeration and scaremongering, and they were distrustful of anything the media presented on the topic of AI in Defence. This is an interesting perspective, and demonstrates that, at least to some extent, participants were curating their sources of information on the topic and had a critical awareness that not everything presented by the mass media is factual and evidence-based.

“A lot of people want to find fault in these sorts of things, it’s potentially the case that any change like that people want to find a fault with it, and I think the media definitely make that slightly worse.” (FG1)

“If someone was to report back to the public, how can you trust that source because so many different media and they’ve all got an agenda.” (FG5)

“I personally don’t trust anything. When it comes to war and we’re talking about these kinds of things, anything that the media puts out there I am very sceptical of in terms of what’s the narrative around it.” (FG2)

3.4.2 The effect of narratives on perceptions of AI in Defence

Many participants talked about the influence of the entertainment media, in the form of films and television programmes, in shaping the narrative of AI in Defence. In the first instance, participants were keen to draw on the fictional portrayal of AI in Defence within films and television programmes.

“We get our opinions from what we get in films and what we read in books.” (FG2)

“We all see these things on the TV and think it doesn’t really matter, it’s only a film or a story but there’s a modicum of truth in them all and the preciseness of machines.” (FG5)

“Half of this stuff though I don’t know if it’s actually real or I’ve just seen it in a film or a TV show and stuff like that.” (FG2)

“I know it sounds weird but you look at Back to the Future or some of the James Bond old films” (FG4)

However, a consequence of this type of media consumption appears to be a blurring of the real and fictional elements of AI in Defence. Some participants found it hard to establish whether the things they had seen in films were real, or whether they were linked to actual applications already in existence.

A second aspect to this theme is an underlying apocalyptic narrative that has been portrayed in several popular film genres. This is particularly evident in the last extract for this section, where one participant articulates the potential for AI to be developed to a point where it attempts to destroy mankind.

“If you see something on the TV or someone mentions it your mind automatically thinks about that’s real then and what will be the consequences.” (FG5)

“Film is like fantasy but in a very strange way there is some kind of learning thing that is coming from films, it looks like a chance there is possibilities so it’s a bit scary that 20 years later this is going to be the case.” (FG4)

“You don’t want AI to take over the whole world, do you, like in some of the films, the humans having to fight back. That’s the scenarios they have in your head, you’ve seen the films, you don’t want that.” (FG4)

3.4.3 The information paradox

On the one hand, participants wanted to be able to access information, and felt that some of the information they had found was hard to digest or very technical. On the other hand, participants also expressed the view that some information should be restricted, and that the public did not need to, or should not, know everything, especially where such information could be used to undermine strategic Defence.

Several participants discussed wanting to be able to digest information on AI in Defence in a more accessible and less technical format. Many participants expressed the view that AI was a complex topic that was often too technical for them to comprehend easily.

“I think more generally it’s probably in the news that you would hear sort of I guess updates which are more palatable to the general public or make more sense or are more relevant.” (FG1)

“From my side I probably hear about most developments, there’s probably a couple of forums I follow on Reddit…I think it is too technical for me.” (FG1)

“AI use in Defence terms because extremely quickly you end up talking about quite sophisticated concepts beyond the average person and the application of it and the implication of those applications.” (FG4)

Of note is a reliance on public forums and social media to obtain information about current and proposed developments in AI. This is worrying, as such forums have the potential to spread misinformation and be a breeding ground for conspiracy theories.

Other participants were more sceptical about the information they received and how they received it. One participant commented that they felt the public were being given limited, out-of-date information.

“Whatever gets out into the public domain is five to ten years behind what is actually happening anyway, so it feels indirect and probably a little bit behind the curve as well.” (FG1)

This sentiment also feeds into another aspect of information release, where participants felt that they were only likely to get information about the applications of AI in Defence via indirect sources, such as leaks of sensitive information. Such a narrative builds into a clear conspiratorial thought process, with participants establishing a position that much of the information they receive on AI in Defence is either out of date or has had its release controlled.

“I think the only kind of plausible way I can see the public being kept informed of what the military is up to is through leaks rather than something like actively done by the military it’s just someone who has decided that no, the public need this information.” (FG2)

Although participants were very keen to receive information about AI in Defence applications and developments, they also established a limit to what the public needed to know. Many participants identified a level at which they would not expect information to be shared with the public, particularly where the release of such information would threaten strategic Defence operations.

“I think the MoD disclose much of their operations to the public for security reasons, for people not to be scared or for people not to panic.” (FG1)

“It’s a bit of a double-edged sword, isn’t it because in a way for it to be effective it needs to be secretive because if we let everyone know what AI, we’re using it kind of then loses its impact and it’s pointless to have it if the enemy is aware of the systems, you have…. “(FG2)

“If you want to defend your country you don’t want the other side knowing everything, that means keeping it from us as well.” (FG4)

Participants were clear that information should be retained or reserved in instances where its release could lead to an opposing faction knowing what is being developed.

3.4.4 The need for better and more accessible information

Several participants raised the issue of ‘better’ or more accurate and accessible information being communicated to the public, which could, in turn, generate higher acceptance and a clearer understanding of current developments in AI in Defence. Participants saw this as part of a democratic process, where being informed about the developments within this field allows them to exert some control over future applications.

“I think if you really want that kind of information to be interested by the public you need to have a lot of very strong education to the public why Defence is important […] you need public education, but do you know how to do it appropriately, so we are receptive about this thing.” (FG4)

“A democracy is supposed to hold us in check to, but if we didn’t know the operations of our military beyond official secrets then we wouldn’t have the capacity of doing that, would we? And other parts of the world have found that out.” (FG5)

“We are a democracy and we’re supposed to have some control over what our government does by means of an informed democracy, we have to know stuff before we vote, don’t we? I mean if we’re ignorant then the vote isn’t as valuable. So, although we can’t know about military secrets, we can know the greater ethical argument about things surely.” (FG5)

Participants were also keen to suggest potential mechanisms for getting the information about AI in Defence out to the public, with one suggesting that the government should establish some form of conference to highlight the current developments surrounding AI in Defence.

“I think maybe at some stage the government need to have a very open public conference and really let the public know.” (FG4)

Another participant again highlighted the potential for the public to have a say in the developments being made within AI in Defence, favouring a public consultation process.

“If we are massively outraged and demand they stop it and they’re like well no, we’re the military, we’re just gonna do what we want…unless there’s gonna be some kind of public consultation there’s not really a lot of point, I guess.” (FG2)

4 Discussion

The aim of the current study was to provide an in-depth exploration of public attitudes and perceptions towards AI in Defence. The work presents the first clear attempt to do so, and provides an initial starting point from which additional work can now be conducted. The following section explores the main themes alongside existing research in the field.

4.1 Humans within the system are important… but can also introduce bias

Although the participants were confident about their views and justified them well, their positions on the involvement of humans in AI in Defence settings seemed somewhat paradoxical. On the one hand, it was clear that humans should have a supervisory or oversight role for AI in Defence, but, on the other hand, humans could also introduce bias into machine decision-making. This calls for a clearer understanding of the role of humans, with delineated boundaries and roles as well as safeguards in the decision-making process. In addition, the participants viewed AI as a ‘thing’ that functioned within a set of parameters determined by the human operator or creator. In this regard, they missed the actual purpose of AI, particularly in a Defence setting. Reviewing the definition presented by the Defense Innovation Board (2019), AI in a Defence setting should have the capacity to function ‘without significant human oversight’ and be able to ‘learn from its experience and improve when exposed to data sets’. Both aspects are clearly in opposition to the perspectives held by the participants interviewed, showing how formally held definitions may have a limited impact on the perceptions of non-experts.

It is not surprising that many of the participants had contradictory or inaccurate representations of what AI in Defence is and how it can or should function. Creating an accurate definition for AI has proven problematic and has been an on-going process that additional research should aim to overcome (Wang 2019). Although this is not an issue for the theoretical study of AI (Wang 2019), it does raise clear concerns when communicating the nature and purpose of AI, particularly in Defence settings.

4.2 Perceptions of trust influence views on AI in Defence

The topic of trust in AI for Defence was also raised repeatedly by the participants, particularly trust in the system itself. Trust in AI has been a contentious issue, with some researchers disputing the notion that AI itself can be considered trustworthy (Ryan 2020). Instead, it is argued that trust resides in the individuals and organisations that develop AI applications. Indeed, Bryson (2018) noted that “AI is not a thing to be trusted. It is a set of software development techniques by which we should be increasing the trustworthiness of our institutions and ourselves”, whilst Ryan (2020) argued that AI should be viewed more as being ‘reliable’ rather than ‘trustworthy’. This was highlighted by the focus group participants who mentioned the potential for AI to malfunction or ‘go rogue’, which is one potential barrier to acceptance.

Contrary to Bryson (2018) and Ryan (2020), other researchers have focused directly on the role of trust in the acceptance of AI. For example, a lack of trust in AI has been identified as one of the main barriers to humans obtaining the full benefits that AI has to offer, and, in turn, one of the primary reasons for this lack of trust is misunderstanding or a lack of understanding (Gillath et al. 2021). Participants may be unable to trust AI in Defence because they do not have a clear understanding of what it is. Another driver for the lack of trust in AI may be the fear of AI (Gillath et al. 2021), which was frequently conveyed by the participants when discussing AI in Defence, particularly where errors and issues of reliability could occur. A further potential source of the lack of trust in AI systems could be linked to the functioning of the system. For many individuals, the inner workings of machine learning and neural networks lack concreteness and remain unexplained and opaque. A lack of trust therefore arises because it is difficult to trust a system whose key functions and resultant decisions are beyond the grasp of most members of the public (Ferrario and Loi 2021; Ryan 2020).

Lack of trust is a key determinant in the integration of AI systems into teams (Gillath et al. 2021; Groom and Nass 2007), but also in the wider adoption of newer technologies (Jeffries and Reed 2002). Bolstering trust can be achieved by reducing perceptions of risk for the technology under scrutiny, a strategy that could be beneficial for AI in Defence (Gillath et al. 2021; McKnight et al. 2002).

It appears that there could be a relationship between trust in AI for Defence, on the one hand, and perceptions of risk and the reliability of AI in Defence, on the other hand. An assessment of this plausible link would be of critical importance for further research, particularly for exploring pathways to enhancing public trust and therefore enhancing acceptance of AI in Defence.

4.3 Ethical concerns and the use of AI in Defence

Participants expressed clear boundaries between acceptable and unacceptable uses of AI in Defence. They were in favour of AI in Defence being used for non-lethal activities that supported frontline service people and aspects of back-office logistics. Many saw the benefit of using AI-controlled drones to access areas that would otherwise put a human operator at risk of serious harm. Participants were also keen to see AI in Defence being deployed in instances where safety and protection were at the forefront of its functioning, such as early warning systems. These applications for AI in Defence fall under the ‘Sustainment and Support’ and ‘Adversarial and Non-Kinetic’ categories offered by Taddeo et al. (2021). However, participants were also keen to stress that these functions should include a level of oversight from a human operator, and were uncomfortable with AI systems being left to carry out operations unchecked. At the other end of the spectrum, the actions viewed as unacceptable uses of AI in Defence were those that involved the use of lethal force, such as Lethal Autonomous Weapon Systems (LAWS; Taddeo et al. 2021). Such a dichotomy in comfort with applications of AI has also been observed in previous research that explored general applications of AI. For example, Schepman and Rodway (2020) noted that participants were more comfortable with, and perceived lower risk in, uses of AI that involved big data analysis or automation (e.g., reducing fraud, helping detect life on other planets) and had no direct potential to replace critical human decision-making tasks. In contrast, activities viewed as involving an appreciable level of human decision-making (e.g., diagnosis of illness, performing surgical procedures, and driving a car) were seen as higher risk, and participants were less comfortable with them.

4.4 Scarcity of reliable information means unreliable sources are used

For the most part, the participants who took part in the focus group sessions had little prior knowledge of the potential applications for AI in Defence. There was some acknowledgement of the use of AI in areas such as unmanned vehicles, the use of drones, and applications in training, but additional details were limited. There was a rich discussion around accessing reliable information about the potential applications of AI in Defence within each of the focus group sessions. There was consensus among participants on the lack of trust in the mass media to present clear and accurate information on the range of applications for AI in Defence. However, participants also expressed confusion about where best to find such information, often citing the complexity of the topic and the lack of more manageable, easy-to-understand material as key issues. This was problematic for an additional reason: the potential for misinformation to bias public perceptions. Several participants suggested that they relied on Internet forums for information surrounding aspects related to AI in Defence. Such forums present an ideal place for conspiracy theories and misinformation to develop (Allcott et al. 2019; Allcott and Gentzkow 2017; Wu et al. 2019). As a result, many members of the public could be basing their knowledge of current developments surrounding AI in Defence on inaccurate information. Other participants pointed to the belief that much of the information they received was out of date, and that the Defence applications they had heard about were already in use, whilst some suggested the only way they would get to hear about such developments would be through leaks of official information.

As noted by Morgan et al. (2020), if the public is to be a key driving force in the acceptance of any technology, they must be clearly and carefully informed about such developments. The participants interviewed acknowledged that a balance between being informed and retaining some secrecy would be important. They understood that being informed could potentially place sensitive information into the hands of an adversarial nation, but, at the same time, they were keen to be kept informed. The need for this information to be presented in an accessible format would also appear to be a key factor in the communication process, particularly as many participants found information on the topic difficult to digest.

4.5 Implications

These findings have immediate practical implications for those tasked with developing informed policy, responsible governance, and responsive assurance for the public in terms of the use of AI in Defence settings. The work offers a pathway to inform the public in a way that can help to make any information on future developments more accessible and to dilute likely misinformation on the use of AI in Defence. Future policy should focus on ensuring that the public’s key concerns are allayed as far as possible and establishing a core framework for creating trust in AI for Defence based on the positive attitudes and perceptions revealed in this study. Tackling misinformation is a clear priority, since misinformation can become hard to overcome when attitudes become more firmly rooted. Therefore, effective strategies to tackle and counteract misinformation should be developed, especially as developments and applications in AI are multiplying.

4.5.1 Narratives and AI in Defence

Perhaps one of the most interesting aspects related to information gathering and AI in Defence was the lack of direct reference to some of the key narratives that surround general aspects of AI. For instance, two common narratives linked to AI are associated with either a dystopian view in which humans are destroyed or enslaved by self-aware AI, or one in which humankind is saved from such a future, creating an over-trusting, utopian view of AI (Cave et al. 2018, 2019; Cave and ÓhÉigeartaigh 2019). Although some participants mentioned the notion of AI ‘going rogue’, and the potential for AI to inadvertently start armed conflicts, these narratives did not feature often in the focus group sessions. This raises a question about how much influence such narratives have on public perceptions of AI and AI in Defence, but the fact that some participants did mention aspects of these means they cannot be ignored. As Cave et al. (2018) noted, the potential to reshape narratives related to AI rests more on how individuals create such narratives than on their content. They suggested that it is important to create ‘spaces’ where new stories can be allowed to emerge via open dialogue and engagement. Such a strategy has a direct link to the discussions held with the current focus group participants, who also suggested the use of public forums and conferences to help create new narratives that enhance the public’s view of AI in Defence.

4.6 Limitations and suggestions for further research

The findings ought to be interpreted considering the limitations of the study. As the first exploration of its kind, this study was necessarily qualitative and small scale. Although this yielded power in terms of depth of investigation of perceptions, it also limited the scope and number of participants that could be realistically sampled. This, in turn, precluded comparisons between age groups. Future research with a larger group that employs sampling focused on a broader range of socio-demographic variables (e.g., education, occupation and relevance to digital technology, socioeconomic status, etc.) would be useful to build on our findings.

As we start to develop evidence on public perceptions of AI in Defence, we can also start to integrate this into a conceptual model of how these perceptions are shaped and how they evolve over time. This would require more nuanced evidence exploring the current narratives that surround the particular concepts identified in this study, which future research should focus on. Additional types of evidence, including longitudinal surveys and even experiments, would then enable us to synthesise the evidence with the ultimate aim of informing interventions to adjust attitudes, address mistrust in relation to the use of AI in Defence, and lead to better, more well-rounded communication strategies.

4.7 Conclusions

The current study presents the first attempt to explore public attitudes and perceptions towards AI in Defence. The emergent understanding is critical as, in a democratic society, the public can directly and indirectly influence developments in the field of AI in Defence. Many participants made assumptions about how AI is currently being used in a Defence setting; these assumptions were generally driven by inaccurate narratives, conspiracy theories, and (mis)information presented on social media. As this research demonstrates, the public currently holds a range of attitudes and perceptions on the use of AI in Defence, even where such AI developments are not yet a reality. We cannot change public discourse and narratives, but knowing that these exist and feed into perceptions and understanding can help shape further communication strategies. Points of contention and misunderstanding are crucial to address in order to maintain trust in AI and its uses in Defence. Thus, understanding relevant public attitudes and perceptions can help to identify areas of potential misunderstanding and misinformation, whilst also helping to allay the public’s current fears and anxieties through better communication.