Introduction

Since the domain of robot ethics took off in early 2005, discussions have largely, but not entirely, been directed towards the use of military robots, healthcare robots, robots in elder care, self-driving cars, and ethical robots (i.e. robots with ethical reasoning capabilities, also referred to as machine ethics) (Arkin 2009; Hevelke and Nida-Rümelin 2015; Lin 2016; Lin et al. 2011; Lokhorst and van den Hoven 2011; Moor 2006; Sharkey 2014; Sharkey 2010; Singer 2009; Sparrow and Sparrow 2006; Sullins 2011; Vallor 2011; van Wynsberghe 2015). What has chiefly been overlooked by the robot ethics community is the use of robots for humanitarian purposes. In particular, one application of these robots has gained prominence due to its pervasive presence: drones used in humanitarian action.

To be sure, this is not a science fiction discussion of a potential future application of drones; the technology has been deployed in this sector for a substantial period of time. The first drones deployed in this sector were used for surveillance in peacekeeping missions in the Democratic Republic of the Congo in 2006 (Karlsrud and Rosén 2013). In the response to the Haiti earthquake in 2010, UNOSAT (Footnote 1) used drones to complement satellite imagery and to map internally displaced populations. In 2013, the first cargo drones were used to deliver medical supplies (Choi-Fitzpatrick et al. 2016). More recently, in 2019, the South Pacific nation of Vanuatu, with the support of UNICEF, began using drones to deliver vaccines as part of its ‘childhood vaccine program’ (Footnote 2). Given the variety of uses, drones have attracted increasing attention from the media, the UN, and humanitarian NGOs. Drones are recognized for both the positive and negative outcomes that may accompany their deployment.

The use of drones for humanitarian aid is seen as a wonderful opportunity to harness the power of innovation in providing disaster relief to those in need (Choi-Fitzpatrick et al. 2016; Choi-Fitzpatrick 2014; Custers 2016; Haidari et al. 2016; Chow 2012; Meier 2014; Momont 2014). Specifically, drones promise to solve the problem of access where security and protection problems are a major concern. On the other hand, there are those who believe that the use of drones in humanitarian contexts disenfranchises communities and local efforts, and adds to a series of remote management and data collection or processing dilemmas that humanitarian organizations are not equipped to handle (Donini and Maxwell 2014; Lichtman and Nair 2015; Raymond et al. 2012; Sandvik and Raymond 2017; Sandvik and Lohne 2014). In other words, relying on drones exacerbates an already difficult situation.

This paper is meant to provide a more nuanced analysis of the question of whether we ‘should’ use drones in humanitarian contexts. To do this, we suggest that the strength of the humanitarian principles approach to answering questions of aid provision can be complemented by a technology-facing approach, namely that of robot ethics. We begin by providing a brief history of the drone in humanitarian contexts. We continue with a discussion of humanitarian ethics and robot ethics and follow with a case study to assess an application of humanitarian drones. Our focus for this assessment is to take a closer look at the ‘technologizing’ of humanitarian care and to do this through the lens of robot ethics, namely a focus on human–robot interactions. For humanitarian workers we raise concerns over the loss of contextual understanding culminating in the de-skilling of workers. For the beneficiary we raise three concerns associated with the dignity of this group, in particular: a threat to the principle of humanity by reducing human–human interactions; a threat to dignity through a lack of informational transparency; and a threat to dignity by failing to account for the physiological and behavioral impacts of the drone on human actors. In so doing we hope to contribute to the growing field of robot ethics as well as to raise awareness of the host of ethical issues, and the lack of attention directed towards them, in the use of drones in humanitarian action.

A short history of the drone in humanitarian action

In December 2006 the UN, funded by the EU, launched its first drone for use in the Democratic Republic of the Congo (Footnote 3). Subsequent applications also include aerial surveillance: in the Central African Republic in 2008 by EUFOR and the UN Mission; and in the Democratic Republic of the Congo in 2013 by the UN Organisation Stabilisation Mission. Drones have also been used for search and rescue in: “the 2007 California wildfires, the 2010 Haitian earthquake, the 2011 nuclear disaster in Japan and the 2013 typhoon in the Philippines” (Sandvik and Lohne 2014, p. 8).

Drones first entered the humanitarian sector as a surveillance technology and a novelty item used by only a handful of actors. Since that time, the technology has become far more accessible: it is relatively easy to use and relatively inexpensive, with off-the-shelf products available for around US $200 or less. Although the humanitarian sector has experienced a surge in the use of drones in parallel with the consumer industry, no estimate of the number of drones in operation today can be found in the current literature. This may be for a variety of reasons, among them that there is no requirement to register the use of drones with an overarching regulatory body.

According to a 2016 Swiss Foundation for Mine Action (FSD) report on drones and related interviews, the most promising uses of drones in the humanitarian sector include: “mapping, delivery of essential items to remote or hard-to-access locations, supporting damage assessments, increasing situational awareness, and monitoring changes” (Soesilo et al. 2016, p. 7). More specifically, “drones have been used to speed up or increase the quality of localized damage assessments, planning for disaster risk reduction, planning of humanitarian interventions, improving camps and shelter units and in the delivery of small essential items. Drones also show great potential in tactical settings as support for the work of search and rescue teams and field teams” (Soesilo et al. 2016, p. 19).

While drones were initially used primarily for mapping, monitoring and assessment, the commercial development of cargo drones has brought applications in humanitarian logistics to the fore, particularly concerning the last-mile problem (Haidari et al. 2016). The primary application domain is medical delivery, with drones being used to bridge the last mile to bring vaccines, blood supplies or HIV diagnostic kits to suffering populations in countries like Lesotho, Malawi and Rwanda. The core argument for this use of drones is that they can provide a cost advantage in reaching low-volume remote locations that often represent an expensive component of humanitarian logistics networks. Further, they can provide access in conflict areas where deliveries would otherwise pose a threat to aid workers.

Humanitarian drones as part of the pending robot revolution

Although a universal definition of a robot remains somewhat controversial, for the purpose of our research here we consider that drones fall within the broad domain of robots; they may be considered ‘service robots’ according to the International Organization for Standardization, which defines a service robot as one that “performs useful tasks for humans or equipment excluding industrial automation applications” (ISO 8373) (Footnote 4). In line with this definition, they operate with varying degrees of autonomy.

To provide a more specific definition of a drone we consider that: “Drones are aircraft which are operated with no pilot on board” (Juul 2015). Drones are also referred to as Unmanned Aerial Vehicles (UAV), Unmanned Aircraft Systems (UAS), or, by the European Union, as Remotely Piloted Aircraft Systems (RPAS). The change in terminology is often attributed to the notoriety the term ‘drones’ acquired through its pervasive use in warfare. Of late, UN organizations and NGOs have returned to the label ‘drone’ for simplicity’s sake as well as to shift the perception of the technology away from exclusively military applications. As such, we will use the word drone in this paper to refer to ‘human-operated humanitarian drones’.

Drone innovation is happening so rapidly that it is difficult to know whether to classify drones according to application, use, or technical features. To date, drones have been classified according to technical features as either fixed-wing, multi-rotor, or hybrid drones (Soesilo et al. 2016). These types determine the possible capabilities of the drone, which in turn determine the environmental conditions under which it can be used. Classifying drones according to use case and/or application may include: agricultural monitoring for development and anti-famine efforts, environmental monitoring, educational applications, spatial modeling of landmine fields and other hazardous areas, conflict monitoring, cadastral mapping and other land-rights assessments, resilience and pre-disaster planning and mapping, post-disaster mapping, and media and advocacy (Footnote 5). In general, the choice of a particular drone type “is guided by application, environmental conditions, organizational needs, and associated costs” (Soesilo et al. 2016, p. 17).

The ethics of drones in the humanitarian space

In the remainder of the paper we shift our focus away from an historical or technical presentation of the drone in humanitarian action towards a look at the ethical frameworks available for an evaluation of said technology. Following this we propose a case study analysis to show the utility of robot ethics for the evaluation of drones in the humanitarian space.

Humanitarian ethics

The wave of national and international non-governmental organizations (NGOs) established in response to the atrocities of the First and Second World Wars culminated in the founding of the United Nations (UN) in 1945. The UN marked the significance of humanitarian action as a form of international relations. Since that time, numerous UN agencies have specified stakeholders and areas of focus for humanitarian actors, e.g. UNICEF for children or UNHCR for refugees.

In practical terms, humanitarian action refers to “the material, political, and military responses—by the humanitarian arms of the UN, international NGOs, and states—to particular invocations of humanitarian suffering” (Sandvik and Lohne 2014, p. 4). It is “a compassionate response to extreme and particular forms of suffering arising from organized human violence and natural disaster” (Slim 2015, p. 1). It is about “protecting, respecting, and saving human life” (Slim 2015, p. 2).

To be sure, humanitarian action is not without its own difficulties. Although the goal is to provide aid to those in need, in order to achieve this humanitarians often put themselves in physical danger and/or they must interact or negotiate with totalitarian governments. Additionally, they are faced with challenging decisions about how to prioritize limited resources and/or decisions about when and from whom to accept funding. This may happen at the individual and/or the organizational level. Moreover, all of this happens against a backdrop of funding cuts and increased competition between humanitarian organizations (Comes and Adrot 2016), where a ‘market logic’ fails because the beneficiaries of humanitarian aid are not funding the organizations that provide aid (Salvadó et al. 2015). In the course of the last decades a humanitarian ethics has developed as a guide for making ‘good’ decisions in difficult and trying times (Slim 2015).

The buttress for humanitarian ethics resides in humanitarian standards such as the Geneva Conventions of 1949 (and their Additional Protocols of 1977 and 2005), UN laws on weaponry, the Refugee Convention, and the new field of international law on disaster response, to name a few (Slim 2015, pp. 4–5). Of great significance are the Fundamental Principles of the Red Cross and Red Crescent Movement (Footnote 6), agreed on in 1965, which are commonly recognized as the humanitarian principles “by the UN Security Council, UN General Assembly, regional organizations, governments and international organizations” (Slim 2015, p. 5). These four principles are: humanity, impartiality, neutrality, and independence (Footnote 7). In short, they indicate that: “human suffering must be addressed wherever it is found” (humanity); “humanitarian action must be carried out on the basis of need alone…” (impartiality); “humanitarian actors must not take sides…” (neutrality); “humanitarian action must be autonomous from political, economic, or military objectives…” (independence) (Bagshaw 2012).

Humanitarian ethics is neither a pre-packaged ethical theory nor a precise ethical framework to apply in a given scenario. It closely resembles care ethics in that it focuses on the relational element of providing good care between care giver (humanitarian actor) and care receiver (humanitarian beneficiary). It also draws on the tradition of virtue ethics in that it engages in a critical reflection on the good humanitarian actor and the virtues necessary for this (Footnote 8). Additionally, it retains elements of a deontological or principled approach by setting out moral imperatives to guide action. The difficulty, then, is knowing what to do when these principles conflict with one another, for example when respecting humanity and providing care on the one hand conflicts with maintaining impartiality on the other. The focus of this paper, however, is not to engage in a reflection on how to balance these principles; rather, it is about how to provide an ethical evaluation of a new technology introduced into the humanitarian space.

Practical advice concerning the use of drones in humanitarian action

To be sure, there is a short but growing list of resources available to the humanitarian sector that provide practical advice on drone deployment. UN-OCHA (the United Nations Office for the Coordination of Humanitarian Affairs) published a report in 2014 on the use of drones outlining the associated prima facie regulatory and legal issues (Gilman and Easton 2014). The main conclusion of that report is to recommend drones for use in natural disasters only, not in conflict zones. Such advice does little to dissuade the ‘nontraditional’ actor who is not bound to follow guidelines or regulations in the humanitarian space in the same way as an established NGO. Equally, it provides little guidance for policy makers and drone operators on how to implement the technology responsibly. More recently, FSD published a report called “Drones in Humanitarian Action”, the first of its kind to document interviews with stakeholders using drones, case studies of drone usage, and an overview of resources available to the community (Soesilo et al. 2016). While the report’s strength lies in its systematic overview, its focus is on describing the field of drones in humanitarian action rather than providing a critical account of the ethical issues related to the technology.

The Humanitarian UAV Network—UAViators—has developed a Code of Conduct (see UAViators.org/docs) (Footnote 9) and launched the Global Drones Regulations Database in 2014 as an open wiki to collect all regulations on drones (www.droneregulations.info). What such a database and/or a dynamically evolving code of conduct and practices do not facilitate, however, is a way to scrutinize the principles and values used in the formulation of laws and regulations. Such a debate often takes place in the academic domain, and this paper is intended to initiate said debate. Moreover, the academic domain, in so far as it is not part of the humanitarian sector and its power plays, also offers a host of transparent methods and methodologies that underpin research (Footnote 10).

Within this domain, researcher Robin Murphy has written a book on disaster robotics as a guide to its theory and practice (Murphy 2014). Publications concerning the risk of weaponized drones in humanitarian action and/or the legitimizing of drones in the military through their use in humanitarianism have poignantly highlighted the rhetoric of the drone (Sandvik and Lohne 2014). More recently, academics Sandvik and Jumbert have co-edited a book entitled “The Good Drone” (Sandvik and Jumbert 2016). This book reflects on when and under what conditions a drone is considered ‘good’ and addresses a variety of applications such as military, humanitarian, agricultural, and environmental protection (Sandvik and Jumbert 2016, p. 2).

Other works of a similar kind discuss this concept of “the good drone” by examining application areas (Choi-Fitzpatrick 2014; Momont 2014) or the intentions of users (Choi-Fitzpatrick et al. 2016). However, what all such works are missing is a focus on the shift in how humanitarian care is provided as a result of the robot’s introduction; put differently, a look at how the drone will interact with the humans involved and what this interaction means for the provision of humanitarian care. The crux of the issue, or the hypothesis as we see it, is that there are specific concerns raised by this technology, as opposed to another, say the personal computer, that need to be addressed. For this reason, we suggest that robot ethics, with its focus on the ethical issues stemming from human–robot interactions, may shed light on such issues. Such reflections can then be integrated into the humanitarian framework for further evaluation and/or fine-grained analysis.

Humanitarian drones and the human–robot interaction

The contribution of robot ethics

As mentioned earlier, drones fall within the broader scope of robotics and as such robot ethics can be a useful tool for analyzing this innovation. Robot ethics is a relatively new field of study that is considered applied ethics in so far as it takes current ethical theories and/or principles and analyzes how the theory or principle can be used to evaluate the design, development and/or implementation of robots. Robot ethics does not exclusively focus on the final robot product; rather, it addresses the entire life cycle of robotics, from the stage of idea generation through design and development and onwards to deployment and regulation. As such robot ethics covers a broad range of ethical issues and questions related to robotics writ large (Capurro 2009; Lin et al. 2011, 2017; Sharkey 2008; Veruggio and Abney 2011).

Perhaps the most central aspect of robot ethics is its focus on the human–robot interaction (HRI) and the consequences this interaction can have for the development of good individuals, the good life of individuals, and/or the good life of groups. Some of the more canonical texts draw attention to the use of care robots for elderly people and how this practice could create a stark decline in the good life of this demographic, measured in terms of their social interactions with other humans (Sharkey 2014; Sharkey and Sharkey 2012; Sparrow and Sparrow 2006). Alternatively, there are those who believe that robots can make a positive contribution to the good life of individuals in so far as they enhance autonomy (Anderson and Anderson 2010). Others focus on positive applications of robots, for example those that can remove humans from the dangerous task of cleaning up electronic waste (Alvarez-de-los-Mozos and Renteria 2017).

In what follows we take the starting point of robot ethics, namely the focus on the HRI, and use this as a platform for the ethical evaluation of the drone. In this way we are looking at the impact that the HRI (achieved through the drone) will have on the provision of humanitarian care. We look specifically at the impact of the drone on the principle of humanity. In order to go deeper into the evaluation, to provide a more fine-grained analysis of the expression of humanity in a particular humanitarian practice, we present a possible future application of a drone in humanitarian action as a case study, namely the use of drones for the inspection of detention centers. This application allows us to consider a trend in humanitarian care whereby the named task (e.g. inspection of detention center) actually fulfills multiple ends.

This case study allows us to unpack the varieties of HRIs and in so doing to uncover ethical issues (tied to the principle of humanity) associated with each. The case study we present here is one in which a drone is (theoretically) used for the inspection and/or surveillance of a detention center in place of a humanitarian aid worker visiting the center in person. A detention center is a place where individuals are held by the immigration authorities of a country they are entering. Oftentimes these individuals are migrants seeking asylum or passing through the country to get to another; they are often fleeing persecution (Footnote 11). Such an application, i.e. drones to inspect detention centers, is hypothesized here based on current market estimates for drones in ‘global infrastructure applications’ (approx. US $45.2 bn), where infrastructure includes monitoring, maintenance, and asset inventory (Footnote 12). The industry push to use drones, combined with the pressure on humanitarian organizations to ‘innovate’, makes a compelling case for their deployment. This kind of outsourcing would allow fewer personnel to travel to detention centers (saving time and money) and has the potential to reduce safety concerns for humanitarian aid workers. Despite these benefits, it is important to consider what is at stake.

The drone, as we are suggesting here, would be used to inspect the living conditions of a detention center and would do this by collecting visual and logistical data of the grounds. These are tasks normally reserved for humanitarian staff who visit the detention center in person and engage in face-to-face interviews with detainees to assess living conditions. In such instances, the staff of the International Committee of the Red Cross (ICRC) “work as an impartial, independent, and neutral organization within the framework of private, confidential interviews with detainees, and of a confidential dialogue with the detaining authorities” (Bouvier 2012, p. 1538).

Aid workers visit detention centers to assess living conditions, but they also take reports of abuse and torture of detainees, and in the process try to advocate for changes in these conditions while also providing moments of positive experience to the detainees. These moments can be incredibly influential in the life of a detainee and can be as simple as sharing a cup of coffee or tea. Such moments have multiple consequences: they assist the aid worker in understanding and interpreting the situation (e.g. what kind of space is being provided, what kind of support or lack thereof is being provided, and so on) and they provide relief for the detainee in often horrible living conditions. At the time of writing, August 2019, considerable concern has been raised over the horrible conditions of individuals held at detention centers on the border between the United States and Mexico, where parents and their children are being separated from one another and kept in atrocious living conditions (Footnote 13). Inspection of such centers plays an integral role in applying pressure on governmental and human rights organizations to call for change.

In the drone application proposed here, there are at least two dyadic interactions between the drone and humans: the HRI when the human is the humanitarian aid worker (either the drone operator or the worker replaced by the drone), and the HRI when the human is the beneficiary (the individual on the ground who will experience the drone, the refugee, the asylum seeker etc.). In each of these instances we are evaluating the HRI through the lens of the principle of humanity. For the former—when the human is the humanitarian aid worker—we suggest a central ethical issue to be the loss of contextual understanding given that the human is no longer physically present to observe living conditions. For the latter—when the human is the beneficiary—we suggest three threats to the dignity of beneficiaries.

To be sure, we do not claim that the potential ethical pitfalls end here; this list may continue to expand as we see the further use of drones in this space and as we consider the variety of aid practices within which the drone will be deployed. Furthermore, we do not claim that this discussion of ethical issues renders the use of drones in humanitarian action unethical; rather, in order to implement this technology responsibly, it is necessary to open up a space for dialogue that reaches beyond the practical details of implementation and engages in a detailed look at how care is being provided and the impact the robot will have on this. We explore two opposing themes in the humanitarian space: respect for the humanitarian principles on the one hand and the ‘technologizing’ of care on the other. Because these two movements stand in opposition, careful attention to the ethical issues introduced by robot use is necessary. Lastly, it is not our aim to solve all of these issues in this work. Instead, we wish to provide the groundwork for further research, in short, to provide an approach for the ethical evaluation of new robots introduced into the humanitarian space.

Human–robot interactions when ‘H’ is the humanitarian actor

In a discussion of human–robot interactions, when the human in this phrase is the humanitarian actor (i.e. the drone operator or the person whose role is replaced by the drone operator) there are specific ethical issues to address, in particular a loss of contextual understanding (Footnote 14) and the consequences of this for the development of humanitarian skills (Comes 2016a, b; Donini and Maxwell 2014). To explain this, we look to the growing pattern of outsourcing in humanitarian aid. Humanitarians are increasingly under pressure to operate in high-risk areas and conflict zones. In response to such pressure, humanitarians have increasingly resorted to subcontracting portions of tasks as well as to remote management of tasks through information and communication technologies (Collinson and Elhawary 2012) such as mobile phones, remote mapping, crowdsourcing, use of ‘big data’ and—of course—drones (Donini and Maxwell 2014).

In these instances the technology becomes an extension of the humanitarian aid worker in so far as it is a tool used to accomplish a task with greater precision and/or efficiency; the mobile phone is used to communicate between workers, the map is used to create a strategy for providing aid. The use of drones to inspect detention centers, however, may run the risk of creating a false separation between the completion of tasks and the promotion of humanity. A large portion of humanitarian aid is about assessing the wellbeing of refugees to gauge where the most pressing needs are found, and oftentimes about providing small acts of kindness to relieve stress. Human workers are needed to assess the physiological and emotional wellbeing of other humans in distress as well as to provide support and reassurance. In fact, in an opinion note for the International Review of the Red Cross, Paul Bouvier discusses just how much of the work of humanitarians (at the ICRC in particular) consists of small acts of kindness “in the field amidst armed conflicts and violence” (Bouvier 2012, p. 1538). Thus, the principle of humanity is not itself a separate task; rather, humanity is expressed in the midst of other humanitarian care practices.

What’s more, the minimal contextual understanding of a detention center that the drone promotes encourages an impoverished assessment of detainee needs. The life, and the needs, of the person held in the detention center are traditionally assessed through in-person communication. The criteria for inspection by drone would most likely take into account the logistics of a detention center (e.g. how many rooms there are, how many people are stationed there, whether fights are observable) rather than the wellbeing of detainees (e.g. are they experiencing post-traumatic stress, are they suffering from illnesses, what is their neurological status). Seen through the lens of the drone, detainees may no longer be unique persons; rather, they will be data points included in a calculation of the functioning of the center.

The culmination of this lack of contextual understanding for the humanitarian aid worker is that it threatens the development of humanitarian skills (Footnote 15). To explain this point, we suggest that in learning how to be a good humanitarian care provider one must receive on-site training in order to develop tacit knowledge about interpreting needs and providing aid. Of equal importance, being present is necessary for cultivating the indispensable character traits emblematic of a good humanitarian worker (which can also be seen in terms of the humanitarian principles: to act with neutrality, impartiality, independence and due attention to humanity). In the same way a teacher learns from each student she teaches or a doctor learns from each patient she treats, the aid worker learns from each beneficiary she assists. This learning happens through moments like inspection, moments that are geared towards the inspection of living quarters but that serve multiple ends (i.e. providing relief to detainees). From this vantage point one must consider that if humanitarian actors are less often on the ground interacting with beneficiaries (because drones and/or other technologies are used to replace such efforts), there exists an imminent threat to the cultivation of the skills necessary for being a ‘good’ humanitarian aid worker. Consequently, the drone may, through its push to de-contextualize care, also threaten to de-skill aid workers.

The risk of de-skilling humanitarian aid workers is not an insurmountable problem per se; it may be mitigated if the right measures are put in place for training and honing the necessary skills. This requires recognition of the strengths and limitations of both the drone and the human actor(s). In other words, it requires recognizing that a drone can provide visual data from hard-to-reach (or dangerous) locations, which is useful for creating a representation of the area to be inspected, whereas the human actor is able (if properly trained) to visually assess the living conditions inside as well as to take stock of the existential state of beneficiaries. A drone used for surveillance while ICRC workers are visiting a detention center may be a valuable complement to the humanitarian mission, in that the drone provides logistical information about the detention center as well as protection for the ICRC worker inside who is speaking to individuals. This can only be achieved if ‘understanding’ the situation takes on a more robust meaning that accounts for both the logistics of the situation and the wellbeing of the individuals in need.

In the ICRC visitation example, one may consider that inspection of a detention center might occur through the use of a drone (the human-operated drone surveils the grounds as a tool for assessing the living conditions within) or through the use of a human aid worker (a human assesses the wellbeing and living conditions of detainees). What we mean to say here is that an inspection accomplished by a drone rather than a human reduces the understanding of the context to logistics. Moreover, it fails to account for the opportunity that human inspection provides to both the human aid worker and the human detainee. While our focus here has been on the risk to the human aid worker, in terms of what the impoverished contextual understanding amounts to (i.e. their de-skilling), we also note that there is a risk to the human detainee: a loss of seemingly simple moments in his/her life that brought joy amidst terrible times.

Human–robot interactions when ‘H’ is the beneficiary

When the human in the HRI is the beneficiary there are specific ethical concerns about the drone’s impact on the dignity of these individuals (Footnote 16). First, the drone threatens a loss of humanity by limiting human-to-human interaction; second, there is a potential loss of dignity through a lack of ‘informational transparency’; and third, there is a threat to dignity from failing to account for the physiological and psychological impacts of the drone.

As discussed in the previous section, the drone could in some instances be used as a substitute for current practices in humanitarian aid and ultimately remove humanitarian workers from engaging in human contact with the beneficiaries. Whereas in the previous section the focus was on the impact on the humanitarian aid worker (i.e. de-skilling), here we aim to shed light on the possible consequences for the dignity of the beneficiary, in this instance the detainee. Bouvier (2012) tells the story of a detainee who receives a small gift of perfume from a humanitarian worker: “A few drops of perfume that restored their feeling of human dignity” (Bouvier 2012, p. 1542). Through these simple acts of kindness, as Bouvier notes, “in some dehumanized places, humanitarian care can provide drops of humanity” (ibid). Above all, “health professionals working with victims of abuse and extreme violence have played key roles in recognizing the mental suffering related to violence and inhumane conditions…” (ibid). Recognizing such suffering, empathizing with individuals in pain, and making small steps to try to alleviate this inhumane treatment are acts of humanity; they are acts that show respect for the dignity of these individuals at the most basic level. The temptation to use drones as a way of making processes of inspection or surveillance more efficient could do more than systematize a process; it may threaten to remove opportunities to restore dignity to individuals suffering under the most extreme conditions.

But a lack of human contact is not the only way drones can threaten the dignity of beneficiaries; another is the lack of ‘informational transparency’ (Thomasen 2018; van Wynsberghe et al. 2018): “it is important to note that some risks may come in the form of how bystanders perceive the drone flying nearby… it can feel invasive to the people encountering it in the airspace above them” (van Wynsberghe et al. 2018, p. 15). When a drone is flying over a detention center, a refugee camp, or a migrating group of individuals fleeing persecution, there is at this point in time no indication of who is flying the drone (e.g. a humanitarian NGO, a malicious government or group, a private company, or an unconnected individual) or what the drone is being used for (e.g. surveillance, mapping, tracking individuals, inspection, etc.). This lack of information about who is flying the drone and what it is there for can be extremely disconcerting to individuals already in fear for their lives. Thus, in addition to the prevalent issues centering on the data being collected (how this is done and what it is used for), there is the issue of how the drone is perceived; the lack of information about the drone, i.e. the lack of ‘informational transparency’, can lead to undignified emotional stress for the beneficiaries below.

The third threat to the dignity of the beneficiary concerns an inadequate account of safety. Discussion of the safety of drone operations to date has centered on take-off, landing, and flight (Soesilo et al. 2016). Not only should we be concerned with the proper flying of the drone (this is of course a concern) but we should also be concerned with the behavioral, physiological and psychological wellbeing of those experiencing the drone (van Wynsberghe et al. 2018). In the same way that military applications of drones have led to what is termed the “chilling effect” (Clarke 2014)—described as the impact on individuals which prevents them from carrying out their days as normal because of drones overhead—drones in humanitarian contexts may have similar chilling effects on beneficiaries living in detention centers. Not only is the resulting chilling effect a kind of dehumanization, but the lack of consideration thereof is also a threat to the dignity of those on the ground.

To concretize this suggestion we turn to academic research studying the physiological responses of animals exposed to drones in the wild (Ditmer et al. 2015; Hodgson and Koh 2016; Rümmler et al. 2016; Vas et al. 2015) (Footnote 17). In a study of American black bears, for instance, “cardiac biologgers reveal that bears exhibit a stress response to UAV flights” (Ditmer et al. 2015, p. 2278). This response was found even when the bear showed no behavioral response to the drone, that is, it did not run away when the drone was overhead. In other words, behavior and physiology may not tell the same story.

Whereas we wholeheartedly agree that safe take-off, landing, and flight are core elements of the moral acceptability of the drone in humanitarian action, given the research on animals presented above we add that safety ought to be broadened to account for the physiological, behavioral, and psychological impacts on the beneficiaries experiencing the drone. We insist that the moral acceptability and desirability of drones in humanitarian contexts depends on the findings of such studies, and that tailored ethical guidelines for drone deployment in humanitarian action be created to reflect their results. In short, we can only protect the dignity of beneficiaries when we understand in what ways it is being threatened.

To be sure, we also acknowledge the reality of the situation: when acquiring data on HRI in the real world, it is often difficult to obtain the necessary approvals pertaining to insurance and liability (who will be sued in the event that a person is harmed or injured), ethics (if in an academic setting), or municipal endorsement (when speaking of robots in a city space, such as delivery robots on the ground or in the air). Such restrictions make it next to impossible to place robots in the wild (i.e. in public spaces) in order to test their performance in real-life situations. And yet understanding both human and robot actions in real-world settings is a necessary condition for the success of future robot applications (in terms of both acceptability and safety). Such restrictions are of course in place for the safety and protection of humans, and it would not be a good idea to forgo said precautionary measures. On the other hand, if there were a place in which robots could be tested without having to jump through the bureaucratic hoops listed above, this would be an enticing possibility for robot companies.

This last point leads to a related danger that we must consider. Given the urgency of humanitarian response, the argument is often made to go forward with a new product for the benefit of those in need. As a result, one must consider the potential consequences this might have:

“testing in impoverished zones may create a two-tier dichotomy in which highly technological countries benefit from the information learned from flight experiments conducted in less-advantaged countries…” (Chow 2012, p. 7).

Thus, while we have argued that studying the impact of drones on human physiological and behavioral responses is necessary to determine the moral permissibility of the drone in humanitarian action, we acknowledge that this may open the possibility that such testing happens exclusively in humanitarian settings for the benefit of more privileged countries. To account for this, we suggest that the vulnerable demographics at risk in these cases deserve enhanced protection and security, rather than allowing the use of this technology without strict ethical and regulatory guidelines. Moreover, given the asymmetry in power between those deploying the technology (i.e. the humanitarian actors and organizations) and those experiencing the technology (i.e. the beneficiaries with no home, no belongings and oftentimes no family with them), there will be little occasion in which the testing methods are called into question. It is thus imperative, if these technologies are going to be used in these contexts for the benefit of individuals in need, that we subject the technology to: testing of its functional capabilities; testing of the physiological and behavioral impact of the drone on humans in multiple contexts; and a variety of ethical analyses that shed light on the increasing technologizing of humanitarian care.

Conclusion

The use of drones in humanitarian contexts has grown so significantly in the last 10 years that it has been suggested almost every humanitarian NGO owns and operates its own drone. In order to answer the question of whether drones should be used in humanitarian contexts, we suggest that it is first necessary to understand the stakes: what are the ethical issues at play with the deployment of this technology in this context?

We suggest in this paper that the strength of the humanitarian principles approach to answering questions of aid provision can be complemented by a technology-facing approach, namely that of robot ethics. Robot ethics places an emphasis on the human–robot interaction and the unique ethical issues resulting from this interplay. In this paper we assessed drones in humanitarian practices through a proposed application (or case study): drones used for the inspection of detention centers. We have shown that for humanitarian actors we ought to be concerned with the lack of contextual understanding afforded to aid workers, culminating in a risk of de-skilling. For the beneficiary we have raised three concerns associated with the dignity of this group, in particular: a threat to the principle of humanity by reducing human–human interactions; a threat to dignity through a lack of informational transparency; and a threat to dignity by failing to account for the physiological and behavioral impacts of the drone on human actors. Although we acknowledge the obstacles (and dangers) associated with understanding the physiological and behavioral impacts, we insist that without a clear picture of how drones impact humans, they cannot be deployed in a morally permissible or acceptable manner.