1 Introduction

Over the past decades, scholars have dissected the manifold ways in which artificial intelligence (AI) systems and digital technologies impact pillars of the law in fields such as human rights law, constitutional law, criminal law, tortious liability and contracts, administrative law, international humanitarian law, and more (Barfield & Pagallo, 2020; Pagallo, 2013a). According to the High-Level Expert Group on Liability and New Technologies (New Technologies Formation), set up by the European Commission in 2018, the challenges brought forth by AI in the legal domain depend on the complexity, opacity, openness, autonomy, predictability, data-drivenness, and vulnerability of computers that mimic human intelligence. In the phrasing of the report on liability for artificial intelligence, “each of these changes may be gradual in nature, but the dimension of gradual change, the range and frequency of situations affected, and the combined effect, results in disruption” (HLEG, 2019, at 32).

Since the mid-2010s, scholars and institutions have increasingly focused on another crucial facet of this legal disruption, that is, the challenges of AI in outer space. There are AI systems for avoiding satellite collisions, or for helping humans find and remove space junk, e.g., the European Space Agency's ClearSpace-1 mission. A further class of AI systems concerns space-based services, such as the use of Global Navigation Satellite System (GNSS) signals to support the functioning of self-driving cars, drones, emergency response services, maritime and agriculture services, and more. The combination of satellite data and AI systems can also help monitor production changes of carmakers or plane traffic at airports. In addition, there is a new generation of autonomous astronaut assistants, such as the Crew Interactive Mobile Companion (CIMON), which enables voice-controlled access to media and documents and navigation through operating and repair instructions, down to support for planetary exploration, especially in conditions too dangerous or prohibitive for humans. The panoply of AI systems for collision avoidance, AI and space data, or AI that supports human activities may, however, raise thorny issues of space law regarding the damages to be covered, the liability regime to be applied, or the procedure to be followed once such damages occur in space or down here on Earth (Bratu et al., 2021). One of the main assumptions of this paper is that the pillars of the current legal framework—which revolves around the international treaties for outer space signed under the auspices of the United Nations in the 1960s and 1970s—fall increasingly short in coping with the challenges of AI (Martin & Freeland, 2021).

However, the novelty of these challenges does not only hinge on that which makes AI unique, i.e., the opacity, openness, autonomy, predictability, etc., stressed by the HLEG of the European Commission, with the corresponding issues related to malfunctions and accidents caused by AI systems, biased input data, and more (HLEG, 2019). It also rests on the uniqueness of outer space and the specificities of its legal challenges. It is our contention that the unique features of AI technologies, once deployed in outer space, amplify current trends in space technology and the space economy—such as the growing importance of private actors and the increasing dependence of Earth on space-based services—that will require the adoption of new legal standards.

In addition to standards of conduct, such as norms, values, or principles that can be adopted as the basis of a legal decision, standards provide thresholds of evaluation that should allow policymakers and courts to assess the benefits and risks of technology (Busch, 2011). The analysis of this paper thus draws attention to the role that standards play in this context, so as to point out what is unique to the set of legal issues that AI systems pose in space activities. These standards concern either sui generis standards of space law or stricter or more flexible standards for AI in outer space, down to what we present as the “principle of equality” between human standards and robotic standards. The AI systems at stake include both space applications and spacecraft operations, such as autonomous space objects and humanoid artificial agents for human wellbeing in deep space missions. Since the growing use of this technology goes hand-in-hand with current projects on space tourism, space hotels, mass space exploration, and similar ventures, we think that the democratization of outer space, so to speak, will trigger a new generation of tricky issues that concern the development of new ethical and legal standards. For example, what are the standards for the diagnosis and treatment of human peritonitis through the X-rays and robotic arms of a humanoid AI system that shall operate in an emergency between Mars and the Moon?

In order to provide a hopefully fruitful introduction to the normative challenges of AI in outer space, the paper is divided into five parts. These five parts aim to cast light on what is old and what is new with today’s troubles of space law—from potential antinomies between public international law and national space law to current trends in the privatization of space—vis-à-vis the challenges that AI raises in this field. Section 2 illustrates the current state of the art in space law with its old and new legal issues. Section 3 clarifies the troubles of space law with technology. Section 4 dwells in particular on the multiple ways in which AI systems may impact today’s legal framework, namely, the rules, norms, and principles of public international law. Section 5 focuses on the case study of humanoid robots equipped with AI systems that will populate the next generation of space missions. Section 6 complements this scrutiny with the impact of AI on legal standards. While Sect. 4 discusses the legal issues that AI raises on Earth and in space alike, Sects. 5 and 6 shed light on the normative issues that AI raises specifically in outer space. Drawing on the basics of legal theory, applied ethics, and elements of the philosophy of technology, the overall intent of this paper is to flesh out how and why AI triggers unique challenges in the legal domain and why some of these challenges are unique to space activities. The result of this investigation provides guidance for the development of the new standards that shall govern the challenges of AI in outer space. Although this kind of debate is in its infancy, we reckon that, over the next few years, breathtaking advancements in technology, trends of space privatization, and current promises on the democratization of outer space will make this problem urgent.

2 The Pillars of Space Law and Their Troubles

Space law is the branch of law that aims to govern activities related to outer space and the corresponding technologies, such as spacecraft, satellites, space stations, in-space propulsion engines, deep-space communication networks, or support infrastructure equipment. Sources of space law traditionally refer to the international treaties set up in the 1960s and 1970s, that is, when technology made some human dreams come true. Consider space missions and planetary explorations with both humans and highly sophisticated computers aboard, even though the computers aboard the 1969 Moon landing had only a tiny fraction of the computing power of an old iPhone 5. The amazing developments of space technology have required a new set of principles and specific regulations. The first pillar of this domain is the international Outer Space Treaty of 1967, “on principles governing the activities of states in the exploration and use of outer space, including the moon and other celestial bodies.” [Footnote 1] Three agreements specify provisions of the Outer Space Treaty under the auspices of the United Nations (UN):

  (i) The Rescue Agreement of 1968, namely, the agreement on the rescue of astronauts and their return to Earth, together with the return of objects launched into outer space [Footnote 2];

  (ii) The Liability Convention of 1972 for damage caused by space objects [Footnote 3];

  (iii) The Registration Convention of 1976 on the registration of objects launched into outer space. [Footnote 4]

In addition to such provisions of public international law, most States Parties have developed their own regulatory frameworks on space-related activities; for example, Luxembourg approved its law on the exploration and use of space resources in July 2017 [Footnote 5]; Germany passed its federal law on protection against security risks due to the dissemination of high-grade earth remote sensing data in November 2007 [Footnote 6]; etc. The overall architecture of this legal field can thus be summed up according to a state-centric approach that revolves around the powers of sovereign states and the responsibilities and duties they have under the UN legal framework. This state-centric approach is comprehensible, since the pillars of space law were created 60 years ago, when states held a sort of space oligopoly and the principle of national sovereignty represented the core of the legal system. This crucial role of states raises, however, a twofold problem. The first issue, a popular topic among experts in space law, concerns the coordination and potential antinomies between public international law and national space law (Gabrynowicz, 2010). The second issue regards trends of privatization. At the international law level, the Outer Space Treaty places responsibility on the states under whose authority the activities of private companies are carried out. Scholars and institutions alike—e.g., the UN Office for Outer Space Affairs (UNOOSA)—have stressed time and again the growing shortcomings of the law in tackling the activities of private companies (Deem, 1983; Ernest, 1991; Ziemblicki & Oralova, 2021; etc.). For example, it can be tricky to determine the proper jurisdiction when a private company, incorporated in one country, launches a spacecraft from a different country (Freeland & Ireland-Piper, 2022).

These scenarios will reasonably multiply in the near future. Around 70% of space activity was driven by the private sector in the late 2010s, and figures will realistically keep growing over the years. UNOOSA’s 2018 annual report predicts that the space business will generate revenues of 1.1 to 2.7 trillion dollars by 2040 (UNOOSA, 2019, at 38). This crucial role of private actors, such as Blue Origin and SpaceX, has been dubbed the “New Space” (Vernile, 2018). It is against these trends that we can fully appreciate the growing loopholes of the law in tackling the disruption of private companies in the space sector. The general rule remains that of old public international law, according to which the duties and obligations of private companies mostly hinge on the responsibilities and accountability of sovereign states.

In particular, Art. VI of the 1967 Outer Space Treaty establishes that states are internationally responsible for national activities in outer space; Art. VII of the Treaty, in turn, establishes the international liability of the states that launch—or procure the launching of—an object into outer space for the damages that such an object may cause to other states or to their natural and legal persons. Such liability is either absolute or depends on fault. The 1972 Liability Convention distinguishes between damages that a space object causes on the surface of the Earth or to aircraft in flight (Art. II) and damages that do not affect them (Art. III). Liability under Art. II is absolute; it is up to the launching state to demonstrate grounds for exoneration, for example, the gross negligence of the claimant state pursuant to Art. VI of the Convention. Liability under Art. III, vice versa, depends on the fault of the launching state or of the persons for whom such a state is internationally responsible. Although the Convention provides a formal process for the resolution of disputes, it can be hard to determine whether a state has incurred fault or whether the damages covered include indirect damages (Christol, 1980). The doctrinal efforts of courts, e.g., the ruling of the International Court of Justice on due diligence duties and fault standards in the Corfu Channel case, leave several problems open in the legal domain (Dennerley, 2018).

This set of traditional issues of space law on notions of fault and liability, due diligence, and other legal standards has to be revisited in light of the further class of issues posed by “the development of space, and its companion, space law” (Lyall & Larsen, 2017), namely, the troubles of space law with technology. Such problems regard the democratization of outer space—which follows the dramatic decrease in costs for space missions and spacecraft—as well as the new challenges brought about by the advancements of technology, such as the regulation of suborbital flights for scientific missions and human transportation (Pekkanen, 2019). In 2016, the European Space Agency (ESA) released two reports, “Robots in Space” and “What is Space 4.0,” to stress the benefits and limits of AI systems for outer space activities. Pros and cons shall be grasped in accordance with the basics of space law provided so far in this section and vis-à-vis the peculiar kind of technology with which we are dealing, that is, space technology and AI systems deployed in outer space. Our claim is that the specificities of space law are inextricably intermingled with the unique challenges of the technologies that make such legal challenges possible. This is why, in addition to the current troubles of space law mentioned above in this section—from the state-centric pillars of international law to the vague notions or general clauses employed in legal texts, down to trends in the privatization of space, or vice versa, the militarization of space (Stephens, 2017), etc.—scholars should be attentive to the ways in which the set of technologies under scrutiny in this paper affects pillars of space law, such as the level of meaningful human control over the functioning of increasingly autonomous AI systems and robots.

A fruitful way to appreciate this impact, e.g., the disruption of AI that ESA mentions in its 2016 reports, has to do with the regulatory efforts on what works in the philosophy of law and technology dub “third-order technologies” (Durante, 2021; Floridi, 2013). We admit that several other kinds of third-order technologies exist out there, e.g., the set of applications for the Internet of Things (IoT). Yet, we think that the differentiation of technological orders is critical to understanding the troubles of space law with technology, in particular, the disruption of AI systems in outer space since the mid-2010s. What is the difference between these three orders of technology, and why are they relevant to space law?

3 The Legal Challenges of Third-Order Technology

A first-order technology stands in-between humans and nature, e.g., the axe. Humans share with many species this ability to make and employ simple first-order technologies. This scenario can be visualized as the hero of the ape-like tribe in Kubrick’s famous 2001: A Space Odyssey holding a bone and realizing how it could be used as a weapon. Yet, only humans have come up with the idea of creating tools that no longer relate to the interaction with other humans and the natural environment but rather to other technologies. Examples of second-order technology, that is, technologies that are in-between humans and other technologies, abound in our everyday lives: keys and locks, the screwdriver and the screw, etc. “The engine, understood as any technology that provides energy to other technologies, is probably the most important, second-order technology” (Floridi, 2013, at 112).

The next game changer was the advent and proliferation of third-order technologies, notably, since the 1950s, information and communication technologies (ICT). As shown by the orbital satellite in Kubrick’s match cut, ICTs and AI systems, such as HAL 9000, work in between other technologies that set up the environment of the system. Third-order technologies provide, in other words, the environment in which further technologies interact, so that humans are no longer needed in the loop. Barcodes, high-frequency trading systems, and the myriad applications of the IoT illustrate this new order of technologies that function in between other technologies, supplanting humans as the users of products and services. “Projects to build self-assembling 3D printers that could exploit lunar resources to build an artificial colony on the Moon may still sound futuristic, but they illustrate well what the future looks like” (Floridi, 2013, at 114).

The governance of third-order technologies deployed in outer space worked reasonably well over its first 50 years (1960–2010). Since the mid-2010s, however, the speed of AI innovation has led several scholars and institutions to pay increasing attention to the disruptive effect of such innovation vis-à-vis the rules and principles of space law (Bratu et al., 2021; Martin & Freeland, 2021; etc.). Although the rate of deployment of AI systems, either in space missions or for space-based services, seems exponential, we may dare to say that the law should have been ready for this disruption. Space technology, as well as AI and robotics, are third-order technologies par excellence. Deep space missions illustrate how technologies function in between further space technologies or other ancillary technologies that set up the environment that makes space activities possible. This remains true even in the future of deep space missions with humans aboard. The benefits of this automation come, of course, at a price. Kubrick’s HAL 9000 reminds us of how AI systems can be overused or misused in space missions, devaluing human skills, removing human responsibility, reducing human control, or eroding human self-determination (Floridi et al., 2018). What renders space legally challenging has thus to do with the kind of technology that makes the exploration and exploitation of space resources possible: the third-order technology that materializes both an old human dream and its potential nightmares.

Interestingly, one of the features of AI that has most attracted the attention of scholars and the public at large is the “autonomy” of such systems and the robots equipped with them. Autonomy means that AI systems and smart robots can modify their inner states or properties without external stimuli, hence exerting control over their own actions without any direct intervention from humans (Pagallo, 2013a). The increasing use of AI systems that augment or replace analysis and decision-making by humans, making sense of huge streams of data or defining and modifying decision-making rules autonomously, is a sound example of how third-order technology works. The effects of such autonomy, from a legal viewpoint, can nonetheless be controversial.
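To make this notion of autonomy more concrete, the following minimal sketch, written in Python, illustrates an agent that rewrites its own decision rule from experience, with no operator command in the loop. The station-keeping scenario, class names, and numbers are hypothetical, chosen only to illustrate the state-transition idea, not to reproduce any actual flight software.

```python
# A minimal, illustrative sketch of "autonomy" as self-modification of a
# decision rule: a toy station-keeping agent that adjusts its own firing
# threshold from experience, with no human command in the loop. All names
# and numbers are hypothetical, invented for illustration only.

class AutonomousAgent:
    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold  # inner decision rule the agent may rewrite

    def act(self, drift: float) -> str:
        # The agent decides by itself whether to fire a corrective thruster.
        return "fire_thruster" if drift > self.threshold else "hold"

    def learn(self, overshoot: bool) -> None:
        # Self-modification: the agent revises its own inner state after
        # observing the outcome, with no external instruction.
        self.threshold += 0.05 if overshoot else -0.01

agent = AutonomousAgent()
for drift, overshoot in [(0.6, True), (0.7, False), (0.4, False)]:
    print(agent.act(drift), round(agent.threshold, 2))
    agent.learn(overshoot)
```

The legally salient point is that the rule the agent applies tomorrow is no longer the one its designers set today, which is precisely what complicates the ascription of fault and foreseeability discussed in the following sections.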

We already mentioned that ascertaining the legal effects of what the state-transition system of an AI application “decides” to do can be tricky in human rights law (Barfield & Pagallo, 2020), criminal law (Hallevy, 2015), constitutional law (Hildebrandt, 2015), business law and contracts (Sartor, 2009), tort law (Pagallo, 2013a), administrative law (Calo & Citron, 2020), and more (Chesterman, 2021). So, what are the ways in which AI systems may affect the rules and principles of space law?

4 The Impact of AI on Space Law

The normative challenges brought forth by third-order technologies—par excellence, the challenges of AI in outer space—are often summed up with the problem of determining and enforcing “meaningful human control” (MHC) over the entire technological cycle and system functioning. The six-year-old debate under the auspices of the United Nations Convention on Certain Conventional Weapons (CCW) and its protocol groups shows the difficulty of the task. [Footnote 7] The challenges of autonomous technologies regard the analysis of the “human element” in the use of lethal force, the classification of the systems under investigation, and the review of potential military applications of related technologies, down to their impact on the principles and rules of international humanitarian law (Taddeo & Blanchard, 2021). What this institutional work in progress illustrates is that “human control of autonomous technologies” is not an oxymoron. There is no strict alternative between human control over autonomous technologies, on the one hand, and, on the other, the further development of autonomous systems that take crucial decisions by themselves. MHC is the subject of academic courses on the risks and threats brought forth by possible losses of control of AI systems, so as to keep them in check (Pagallo, 2017). Rather than playing a zero-sum game, the aim should be to attain a fair balance between human control and AI autonomy. In the field of space technologies, it is even obvious that the autonomy of AI systems should be strengthened, for example, for the management of complex constellations, so as to reduce the workload of ground operators, or for the guidance, navigation, and control of rovers, so as to remove human scheduling errors and let rovers find their way across uncharted terrain, navigating around obstacles either on the Moon or on Mars.
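One simple way to picture such a balance, rather than a zero-sum game, is a dispatcher that lets low-risk decisions execute autonomously while escalating high-risk ones to a human operator. The sketch below is ours, with invented risk scores and thresholds; it is one possible operationalization of MHC, not a standard drawn from the CCW debate.

```python
# A hedged sketch of one way to balance "meaningful human control" with AI
# autonomy: routine, low-risk decisions execute autonomously, while
# decisions above a risk threshold are escalated to a human operator.
# Risk scores and thresholds are invented for illustration.

from dataclasses import dataclass

@dataclass
class Decision:
    description: str
    risk: float  # estimated risk in [0, 1]

def dispatch(decision: Decision, human_approve, risk_threshold: float = 0.7) -> str:
    """Execute autonomously below the threshold; escalate above it."""
    if decision.risk < risk_threshold:
        return f"autonomous: {decision.description}"
    # Above the threshold, a human stays meaningfully in the loop.
    if human_approve(decision):
        return f"human-approved: {decision.description}"
    return f"vetoed: {decision.description}"

# Usage: a rover reroutes around a rock on its own, while a risky descent
# into a crater is escalated to a ground operator for confirmation.
always_yes = lambda d: True
print(dispatch(Decision("reroute around obstacle", 0.2), always_yes))
print(dispatch(Decision("descend into crater", 0.9), always_yes))
```

Where to set such a threshold is, of course, the normative question rather than the technical one, as the next paragraph suggests.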

The balance that shall be struck between human control and AI autonomy can be measured by the degree of “social acceptability” of the risks inherent in the automation process, as well as by the level of social and political cohesion regarding the values and principles at stake with the development of increasingly autonomous technologies (Pagallo & Durante, 2016). Therefore, we can appreciate the legal impact of AI by considering the values and principles, as well as the rights and interests, affected by autonomous technologies that may fall within the loopholes of current regulations. The impact of autonomous technologies on matters of liability, accountability, and responsibility depends, of course, on the legal framework under scrutiny, that is, the specific features of criminal law, international humanitarian law, tort law, administrative law, etc. In the field of space law, scholars have progressively discussed principles and rules of the field that may fall short in dealing with the interactivity, opacity, and unpredictability of autonomous “space objects.” AI challenges to space law include the damages to be covered, the procedure to be followed once such damages occur, or whether and to what extent the decisions of these “objects” fall under the fault of persons for whom a state is liable (Bratu et al., 2021, at 437). These liability issues also concern the use of AI systems for space-based services, such as AI systems using Global Navigation Satellite System (GNSS) signals to support emergency response services, autonomous vehicles, unmanned aircraft systems, and more (Bratu, 2022).

In light of this class of legal issues posed by AI in outer space (Martin & Freeland, 2021), it is our contention that the current debate on such issues can benefit from the 30-year-old debate on the different ways in which the interactivity, autonomy, and adaptability of AI robotics affect pillars of the law (Karnow, 1996; Solum, 1992). At times, the law should reinforce current strict liability doctrines for the risks posed by third-order technologies. This is what is under discussion with the CCW efforts in international humanitarian law and, although still under revision, with the set of duties and obligations for designers, manufacturers, and even end-users of high-risk AI applications in EU law, i.e., what EU lawmakers plan to establish in the civilian sector with the Artificial Intelligence Act, or “AIA.” High-risk AI uses include biometric identification systems that aim to categorize humans, or that are employed for education and vocational training, or for employment and worker management, etc.

Methods of technological regulation that hinge on strict liability rules are often complemented with the extension of current tortious liability doctrines to tackle compensation gaps in accidents provoked by AI systems. The doctrines of tort law include duties of care and theories of agency, as well as procedural standards on burdens of proof, presumptions, etc. To tackle the accountability gaps of AI, some experts, such as the authors of the 2019 HLEG report on liability for AI and other emerging digital technologies, have recommended the extension of tortious liability doctrines. For example, the burden of proving a defect should be reversed under several circumstances, such as (i) disproportionate difficulties or costs related to determining the safety of the AI system or proving that a certain level of safety has not been met; (ii) cases in which the victim has no reasonable access to the information or that information has not been logged, thereby “triggering the presumption that the condition of liability to be proven by the missing information is fulfilled” (no. 15 of the report); (iii) disproportionate difficulties and costs related to determining the relevant standard of care or proving its violation; etc. These recommendations could be fruitfully extended to tackle some liability issues of space law that go hand-in-hand with current trends in the privatization of outer space.

Scholars expect that liability issues of space law will increasingly regard private parties (Larsen, 2019). Contractual issues, or tortious liability claims related to space activities, have so far been addressed mostly through arbitration, e.g., before the Permanent Court of Arbitration under the 1976 UNCITRAL arbitration rules or, alternatively, under the “Optional Rules for Arbitration of Disputes Relating to Outer Space Activities.” Yet, we may wonder about the level of legal protection that private companies should guarantee in outer space vis-à-vis the protection of the basic rights of space tourists, space explorers, or space settlers (Freeland & Jakhu, 2014; Lim, 2020). For example, the Cable News Network (CNN) announced on 2 May 2022 that the first space hotel was scheduled to open in 2025. [Footnote 8] These scenarios suggest the rise of a new generation of tortious claims and contractual issues of space law that affect the kinds of rights to be protected or the kinds of damage to be covered, due either to the intricacy of the legal chain of causes and effects in complex digital environments (Karnow, 1996; Martin & Freeland, 2022) or to cases of distributed responsibility in which unjust damages are caused by local interactions that are in themselves not illegal but rather morally neutral (Floridi, 2016). These scenarios will likely multiply the cases in which either no human is liable for unjust damages or the victim is unable to identify a tortfeasor (Pagallo, 2011).

Traditional policies on strict liability for producers and operators of AI systems can be adapted to this context, together with the extension of fault-based liability regimes for every human wrongdoer. However, cases of distributed responsibility or inextricable legal causation show that the extension of traditional approaches to the law is often not enough. A legal analogy has its limits. Cases of hacking, for example, illustrate how current rules of tort law may prove insufficient to defend victims of cyberattacks. It can be difficult, if not impossible, for the individual victim of a cyberattack to identify a human tortfeasor. Scholars have thus recommended several ways in which the law can address the current loopholes of tort law. Going back to the HLEG report on liability for AI and other emerging digital technologies, one of these ways regards the implementation of compensation schemes: “compensation funds may be used to protect tort victims who are entitled to compensation according to the applicable liability rules, but whose claims cannot be satisfied” (nos. 18 and 34 of the report). Therefore, in addition to the extension of current doctrines of tort law to some new scenarios of space law, the liability issues of AI systems may recommend stepping away from traditional approaches to the field, endorsing new kinds of responsibility and liability also for the laws of space.

The increasing autonomy of space objects and the privatization of space missions affect not only the norms, values, or principles adopted as the basis of a legal decision in space law. The use of AI systems also affects the thresholds of evaluation that should allow policymakers and courts to assess the benefits and risks of technology. The ways in which autonomous technologies impact space law and its binding rules on public and private liability, or the damages to be covered and the rights to be protected, trigger two different kinds of problems. On the one hand, this section mentioned the work of legal scholars that focuses on the unique features of AI that impact tenets and norms of the law, whether regulating its uses in space missions or down here on Earth; on the other hand, the novelty of the legal challenge depends on the set of problems that AI systems pose only in outer space. Our conjecture is that the increasing use of AI systems, such as autonomous space objects, robots, and other artificial companions for deep space missions with humans aboard, realigns the thresholds of evaluation that current standards of human-AI interaction, social robotics, and the law have developed. The next section explores how far this idea goes with the case study of humanoid robots for the wellbeing of humans in outer space.

5 The Charge of Humanoid Robots in Outer Space

An increasing set of AI systems for outer space missions regards robotic applications for the wellbeing of humans, such as AI robots for healthcare and entertainment. According to the “sense-think-act” paradigm of AI research (Bekey, 2005; Esteva et al., 2019), these artificial companions and assistants shall have the ability to “perceive something complex and make appropriate decisions” (Thrun in Singer, 2009, at 77). AI robots for healthcare and the wellbeing of humans are already part of today’s crews in routine space missions. Several of these applications will present humanoid features that induce humans to ascribe their own properties or characteristics to such AI robots. This trend rests on the demonstrated benefits of the approach. A sound amount of work on human–robot interaction (HRI) and social robotics shows, in fact, how fruitful the anthropomorphic features of these systems can be (Chandani et al., 2022). Empirical research has cast light on the advantages of humanoid robots that depend on functional purposes, such as interaction with human tools and environments. In the mid-2010s, the US agency DARPA organized a series of humanoid robotics challenges to conduct humanitarian, disaster relief, and related operations that proved “too great in scale and scope for timely and effective human response.” [Footnote 9] A recent survey shows an overall preference for anthropomorphic features in healthcare (Klüber & Onnasch, 2022). Although this research did not consider the scenario of HRI in outer space, its outcomes suggest that communication is a key determinant of robot preference: “the majority of research to date emphasizes a high degree of anthropomorphism with respect to robot appearance and communication to support positive perceptions of robots” (Klüber & Onnasch, 2022).

Meaningful engagement with humans is another ingredient in the success of such humanoid robots and other artificial companions. Considering the case of robots employed as persuaders to promote recycling, for example, some reckon that “robots, because of their anthropomorphic features, are more likely to evoke empathy than tablet computers, and thus robots can be more effective in promoting pro-social behavior” (Lo et al., 2022). The pros of humanoid AI robots in outer space do not mean that the technology raises no risks. Most jurisdictions, e.g., EU law, would indeed consider such AI systems as “high-risk.” This high level of risk concerns, but is not limited to, the safety and security standards that all space missions shall guarantee. Safeguards and constraints of further fields of legal regulation, such as data protection and cybersecurity, inevitably come into play (Bassi et al., 2019). Humanoid AI robots that will populate the next generation of space missions and space hotels shall comply with such further fields of legal regulation as, for example, data privacy (Pagallo, 2013b). It is noteworthy, however, that most of the legal fields of technological regulation that affect the design, manufacturing, and use of AI systems and smart robots are currently under revision in Europe. The fields of machinery regulation, consumer protection, and cybersecurity are particularly instructive.

First, it is worth mentioning the 2018 evaluation under the European Commission’s Regulatory Fitness and Performance Programme (REFIT). The report stressed certain shortcomings in the enforcement of the EU machinery directive 2006/42/EC and “found that despite its technology-neutral design, the directive might not sufficiently cover new risks stemming from emerging technologies (in particular robots using artificial intelligence technologies).” [Footnote 10] The Commission issued the proposal for a new regulation on machinery products (COM(2021) 202) on 21 April 2021, as part of the artificial intelligence package that includes the AIA, presented the same day. Both proposals are currently under revision by the Council and the European Parliament.

Second, regarding consumer law, the Commission issued a new proposal for the amendment of the 1985 directive on product liability, the so-called PLD regime, in September 2022. [Footnote 11] Several crucial definitions of the old legal framework, e.g., on software and digital products or on causal relationships between defects and damages, fell short in coping with the challenges of AI robotics, such as domestic robots (Barfield & Pagallo, 2020, at 96). Although the new PLD regime and the complementary provisions of the new AI liability directive [Footnote 12] do not consider the specificities of outer space, it seems fair to affirm that they represent a regulatory minimum for the protection of space explorers and space settlers who count on the help, assistance, or companionship of AI robots.

Third, the same holds for cybersecurity: EU Regulation 2019/881, which establishes the new European Union Agency for Cybersecurity (ENISA) and repeals Regulation (EU) No 526/2013, shall be complemented with the further provisions of the network and information systems (NIS) directives, which are themselves under amendment. In addition, this regulatory framework shall be integrated with the governance of cybersecurity in outer space, especially for military purposes (Pagallo, 2015). Regulations of space law have indeed to be understood and complemented in accordance with the further challenges of cyberspace.

Considering all this institutional work in progress, several tricky legal issues posed by the use of AI systems and smart robots, in outer space but not only there, inevitably remain open. However, we may conceive of a new generation of AI robots in outer space that abide by all possible regulations of the law, from personal data protection to machinery safety, from consumer law to cybersecurity, and yet such systems would raise a further set of issues that depend on the application of terrestrial standards to space activities. This conjecture is suggested by current trends in the privatization and democratization of outer space that go hand-in-hand with the challenges of AI in space missions. The next section aims to substantiate the conjecture by focusing on the normative challenges that AI systems raise only when such systems are deployed in, or support, outer space activities.

6 The Realignment of Terrestrial Standards

In the previous sections, we stressed the manifold ways in which AI systems and robots equipped with AI may affect current regulations of space law. In addition to thorny issues concerning fault and liability (Bratu et al., 2021), damages and jurisdiction (Freeland & Ireland-Piper, 2022), or the challenges of cybersecurity in outer space (Falco, 2019), attention should be drawn to the realignment of legal standards that follows from the increasing use of AI systems in outer space. Standards complement the norms, values, or principles that are adopted as the basis of legal decisions. In addition to standards on burdens of proof and duties of care, negligence, and due diligence, the focus of this section is on the thresholds of evaluation according to which standards help assess the pros and cons of technology (Busch, 2011). Since this kind of research is still mostly unexplored, it is important to grasp the different ways in which the use of AI systems and smart robots in outer space may affect such standards.

In our view, the realignment of legal standards can properly be grasped in accordance with four different scenarios. The realignment regards either sui generis standards for outer space (sub-Sect. 6.1); standards that are softer or stricter than terrestrial standards for HRI (sub-Sects. 6.2 and 6.3, respectively); or, finally, the golden rule provided by the “principle of equality” between robot standards and human standards (sub-Sect. 6.4). The principle of equality should prevent supererogatory solutions to some extreme challenges of AI in outer space. The overall aim of this section is to offer some guidance for this new kind of research, which the speed of innovation and human ingenuity will progressively put in the spotlight.

6.1 Sui Generis Standards for AI in Outer Space

What standards we should adopt for smart AI agents and systems is a question often debated by scholars (D’Agostino & Durante, 2018); in the field of autonomous vehicles, for example, some claim that robot standards should be higher or stricter than human standards (Sparrow & Howard, 2017). The same holds true in several fields of medicine (Kempt et al., 2022; Pagallo, 2022).

Since the inception of space law in the 1960s and 1970s, however, the unique challenges of outer space have recommended the adoption of specific standards, e.g., security and safety standards for every space mission. Such standards include the ECSS system, i.e., the European Cooperation for Space Standardization, and the ESA/SCC Specification System, that is, the system of the European Space Agency for the specification, qualification, and procurement of the electrical, electronic, and electro-mechanical components for use in space programs. Recent technological advancements and the speed of technological innovation require the constant expansion of such programs and standards. For example, there is a considerable amount of work devoted to the development of new standards for AI systems in outer space regarding cybersecurity and cyber risk analysis in extreme environments, e.g., the colonization of Mars (Radanliev et al., 2020).

Work on the safety and integrity management of missions and operations in harsh environments is a well-established field of research (Golestani et al., 2020). Still, as occurs in other fields of AI innovation, the process of standardization regarding the uses of AI in outer space is just taking its first steps (Pagallo & Durante, 2022). To put it in the phrasing of the European Space Agency, “AI, and in particular ML, still has some way to go before it is used extensively for space applications… the complicated models and structures necessary for ML will need to be improved before it can be extensively useful.” [Footnote 13] It is thus likely that the development of new technological standards for the uses of AI in outer space will represent one of the most relevant topics for experts in space law over the next few years.

6.2 Softer Standards for Space Companions

We should never forget that outer space is a faraway, unfriendly, and dangerous environment. According to CNN news of 4 June 2022, astronauts such as Scott Kelly or Christina Koch, when returning to Earth after long space flights, “couldn’t wait to feel rain or ocean waves again,” a saudade for the blue marble. [Footnote 14] Such long space flights arguably require humanoid companions to help crews face the mental and emotional challenges of deep space missions. We introduced this kind of AI-equipped robots above in Sect. 5, stressing that a considerable amount of empirical evidence on the pros of humanoid AI robots for human wellbeing suggests that such benefits can be even stronger during a long space mission than down here on Earth. Some think that the goal of social robotics in a space exploration context is to constructively develop an illusion of human traits in a machine, either to help manage a need for a degree of social interaction or to extend human sensing and action through more immersive telepresence robotics (Zawieska & Duffy, 2014). Others explore new horizons, such as what may change when having sex with robots on a very long space mission (Scheutz & Arnold, 2016).

Less controversial examples illustrate how the bar of ethical and legal standards can be lowered due to the extreme conditions of outer space, for instance, when sharing personal data with a new set of humanoids for healthcare and entertainment. Scholars have extensively studied the impact of AI robots on privacy law and data protection, for example, considering what US experts dub a “reasonable expectation” of privacy, or vis-à-vis the tenets of EU data protection law, such as the principles of purpose limitation and data minimization (Barfield & Pagallo, 2020; Pagallo, 2013b). In addition to the troubles of both US privacy law and EU data protection law with the use of increasingly autonomous AI systems, it seems fair to admit, however, a further challenge that HRI will increasingly raise in outer space. It concerns what scholars claim is a key feature of our privacy rights: personal choices play the main role when humans modulate different levels of access and control, depending on the context and its circumstances (Nissenbaum, 2004). Trust does not necessarily entail any identifiable direct human interaction and is feasible in HRI and among artificial agents (Castelfranchi & Falcone, 1998; Durante, 2010, 2011; Taddeo, 2010). Since the aim is to create robots capable of engaging “in meaningful social interaction with people” (Duffy, 2003), such engagement entails that how the robot should behave will depend on parameters such as the circumstances of a space mission and the kind of aid that such a robot provides, for example, to older adults (Pedell et al., 2022), children (Alabdulkareem et al., 2022), or dependent people (Chandani et al., 2022), down to the social background and gender of the HRI observer (Zlotowski et al., 2015).
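As a toy illustration of such context-dependent modulation, consider the following sketch of a data-access policy for a robotic companion, loosely inspired by the idea that privacy expectations vary with context (Nissenbaum, 2004). The contexts, data categories, and rules below are hypothetical and do not implement any actual data protection regime.

```python
# An illustrative, hypothetical policy: the data a robotic companion may
# access is modulated by the mission context, so that the bar is lowered
# only where the circumstances of the mission warrant it.

POLICY = {
    # context: data categories the companion may access in that context
    "routine_operations": {"voice_commands"},
    "health_monitoring": {"voice_commands", "vital_signs"},
    "medical_emergency": {"voice_commands", "vital_signs", "medical_history"},
}

def may_access(context: str, data_category: str) -> bool:
    """Allow access only if the current mission context warrants it."""
    return data_category in POLICY.get(context, set())

# During routine operations the companion cannot read medical history;
# in an emergency between Mars and the Moon, the threshold is relaxed.
print(may_access("routine_operations", "medical_history"))  # False
print(may_access("medical_emergency", "medical_history"))   # True
```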

Considering work on the social acceptability of AI robots and human preference for their anthropomorphic features (Klüber & Onnasch, 2022; Krägeloh et al., 2019; Liu & Tao, 2022), our conjecture is that standards for HRI and social robotics will not only be adapted to the unique context and extreme circumstances of every space mission but will also be softened by the crucial role that trust, personalization, HRI privacy, anthropomorphism, and increasing social acceptance by the public play in this domain of technological innovation. Are there further ways in which HRI in outer space may require another set of legal standards?

6.3 Stricter Standards for Space Activities

We already mentioned, in the introductory remarks of this section, the debate on whether and to what extent robotic standards, e.g., the new legal standards for the design and employment of autonomous vehicles, should be stricter than human standards, i.e., the traditional duties and obligations of human drivers (Sparrow & Howard, 2017). The argument, according to which AI systems should replace or interact with humans if—and only if—such systems guarantee stricter standards of legal protection, seems particularly pertinent in this context because it perfectly aligns with certain legal challenges of outer space.

Notably, most sui generis standards of space law, as illustrated above in Sect. 6.1, are stricter than the corresponding terrestrial standards on safety and security, due to the constraints of a hazardous and hostile environment. Yet, the set of duties and obligations that follow from the standards of the ECSS system, the ESA/SCC specification system, etc., must be complemented with the duties and obligations that may concern the use of AI systems and robots falling under data protection and machinery safety law, cybersecurity, and consumer protection. The protection of the next generation of space explorers, space tourists, or even space colonizers (Jessen, 2017; Marsh, 2006) suggests further complementing the current level of protection with the adoption of additional stricter standards regarding duties of care and information, presumptions, and burdens of proof, as well as solutions that often step away from traditional approaches of tort law, such as the compensation funds for victims of cyberattacks illustrated above in Sect. 4. These stricter standards should properly be extended to outer space due to current trends in the privatization and democratization of space missions. Individuals would face disproportionate difficulties or costs in determining the safety of their AI assistant during a mission to Mars. Arbitration and contractual clauses between private parties are insufficient to protect the rights of a new generation of mass space explorers. The limits of tort law down here on Earth also reverberate in outer space and cyberspace. In our view, the stricter sui generis standards of Sect. 6.1 shall be complemented with a new generation of stricter standards for HRI in outer space that benefit from the current debate and institutional work in progress in consumer law, AI machinery, or software services.

However, the need for stricter standards in outer space does not mean that robot standards should always be stricter than human standards, or that standards of HRI in outer space should always be stricter than standards of traditional HRI on Earth.

We already noted above in the previous section that some current standards on, e.g., privacy and data protection could conveniently be softened for HRI in outer space. Likewise, it is questionable whether all AI systems and robotic applications should always provide stricter standards of behavior and legal protection. We can sum up this conjecture in accordance with a new “principle of equality” for outer space. The principle aims to clarify the fourth way in which the use of AI systems in space missions realigns the standards of ethics and the law.

6.4 The Principle of Equality

The “principle of equality” is often associated with the claims of the Front of Robotic Liberation: the more we admit the presence of an artificial mind in a machine capable of autonomy and intentional action, the more likely it is that a new generation of ethical issues concerning the legal personhood of AI robots follows as a result (Pagallo, 2018).

In this context, however, we can set aside sci-fi scenarios, focusing on the class of AI applications for healthcare and the wellbeing of humans in outer space activities illustrated in the previous parts of our analysis. Consider the use of AI robots for diagnostics and prevention (Alsharqi et al., 2018; Rajkomar et al., 2019; Wang et al., 2016), precision medicine and medical research (Lee et al., 2018; Liu et al., 2018), clinical decision-making and mobile health (Higgins, 2016; Jamal et al., 2017), and healthcare management and service delivery (Davenport & Kalakota, 2019). Such AI assistants, alone or in combination, will increasingly be an essential part of crews in space missions. It is not unrealistic to imagine the emergency of a space tourist with fever and nausea, diffuse abdominal rigidity, and sinus tachycardia. What are the standards for the diagnosis and treatment of human peritonitis through the X-rays and robotic arms of a humanoid AI system during an emergency between Mars and the Moon?

The question is not entirely new. Ethical dilemmas concerning the use of technologies in harsh, extreme environments are the bread and butter of scholars working on the design and deployment of humanoid robot surgeons on the battlefield (O’Sullivan et al., 2018). A balance shall be struck between the risks, and even the impossibility, of humans intervening and operating on the battlefield and the standards of safety and security provided by such robot surgeons. This balance suggests that, under these extreme circumstances, AI robots should not guarantee higher standards than human doctors but rather the same standards of diligence and rates of success, in accordance with the “principle of functional equivalence and the principle of equal protection under the law” (Durante & Floridi, 2022). The alternative to this conclusion, that is, stricter robot standards by default, would be counterintuitive and flawed (Sparrow & Howard, 2017). Under that alternative, we should let soldiers die without any attempt to save them or let space patients suffer fulminant peritonitis all the way until any possible return to Earth.

The “principle of equality” between robot standards and human standards entails that human standards are already satisfactory enough to be adopted as the golden rule of outer space. It seems reasonable to accept that our humanoid doctor need not provide better standards than the human doctor we could find on Earth, especially when traveling toward Mars during an emergency. Therefore, the principle of equality should be complemented by the further principle of ‘equal protection under the law,’ so that equal procedural and substantive safeguards of the law “must be ensured for all those who are the receivers of the effects of AI systems or the performance of emerging digital technologies” (Durante & Floridi, 2022, at 105).
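A simple numerical sketch may clarify what testing against “the same rates of success” could look like. Assuming invented trial figures and using a one-sided two-proportion z-test as a rough statistical proxy, the question becomes whether there is evidence that the robot falls below the human benchmark, not whether it exceeds it. A rigorous assessment would use a proper non-inferiority design with a pre-specified margin; this is only an illustration.

```python
# Illustrative only: does a robot doctor meet (not exceed) the human
# standard? We test H1: robot success rate < human success rate with a
# one-sided two-proportion z-test. All figures are invented; a real
# evaluation would use a non-inferiority design with a pre-set margin.

from math import sqrt, erf

def robot_meets_human_standard(success_r, n_r, success_h, n_h, alpha=0.05):
    p_r, p_h = success_r / n_r, success_h / n_h
    p_pool = (success_r + success_h) / (n_r + n_h)  # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_r + 1 / n_h))
    z = (p_r - p_h) / se
    p_value = 0.5 * (1 + erf(z / sqrt(2)))  # Phi(z): evidence robot is worse
    # A large p-value means no evidence that the robot falls below the
    # human benchmark, which is all the principle of equality requires.
    return z, p_value, p_value > alpha

# Hypothetical trial: robot surgeon 92/100 successes vs human 90/100.
z, p, meets = robot_meets_human_standard(92, 100, 90, 100)
print(f"z = {z:.2f}, one-sided p = {p:.2f}, meets standard: {meets}")
```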

Scholars and institutions have debated the rates of success and levels of risk posed by the manifold applications of AI systems and robotics in medicine and healthcare (Pagallo, 2013a). These analyses of the probabilities of events, their consequences, and their costs are crucial to determining the speed at which AI systems can be implemented and adopted in laboratories and hospitals. Empirical evidence on the risks, and on the predictability of the factors at the origin of uncertainty, helps courts address issues of legal responsibility and liability for the use of such AI doctors. Scholars have often stressed the problems that the implementation and adoption of AI robotics raise in traditional healthcare centers due to the autonomy, opacity, and unpredictability of such systems (Davenport & Kalakota, 2019). Designers and producers of AI systems should not avoid liability for unforeseeable defects “in cases where it was predictable that unforeseen developments might occur” (HLEG, 2019, at 43). However, the aim of this section was not to underestimate the legal challenges of AI but rather to shed light on the uniqueness of those challenges in such a hostile and extreme environment as outer space.

7 Conclusions

The paper has examined four different classes of legal issues in the field of space law to flesh out and appreciate the impact of AI systems on this realm of legal regulation. The first class regards traditional discussions of space law on the coordination and potential antinomies between public international law and national space law, as well as scholarly debate and case law on fault and liability, due diligence, and other legal standards that may affect the responsibilities and accountability of both public actors and private companies.

The second class of legal issues concerned the privatization of space. Sections 2 and 4 above drew attention to whether and to what extent current provisions of space law may fall short in tackling the activities of private companies. This class of legal issues, e.g., on third-party liability, may of course overlap with the previous class of issues on coordination between public international law and national space law. For example, according to UNOOSA, “national space-law making is… important in view of increasing participation of non-governmental entities in space activities.” Accordingly, when enacting regulatory frameworks for national space activities, states may consider matters ranging from “the launch of objects into and their return from outer space, the operation of a launch or re-entry site and the operation and control of space objects in orbit” to “the design and manufacture of spacecraft, the application of space science and technology, and exploration activities and research.” [Footnote 15]

The third class of legal issues revolved around the disruption of AI in outer space since the mid-2010s. Sections 3 and 4 illustrated what is unique to the legal challenges of AI either on Earth or in outer space, because of the interactivity, autonomy, opacity, and adaptability of these systems. The disruption varies according to the legal framework under scrutiny: criminal law, international humanitarian law, tort law, administrative law, etc. In the field of space law, scholars have mostly focused on the principles and rules of the current regulatory framework that could be affected by the adaptability, unpredictability, and opacity of increasingly autonomous “space objects.” This debate complements traditional discussions on fault, foreseeability, or due diligence that define the responsibilities and liability of both public actors and private companies. One of the main contentions of this paper was, as seen in Sect. 4, that the current debate on the legal issues posed by AI in outer space can benefit from the 30-year-old discussions on how AI systems and smart robots impact the pillars of the law. Legislators can reinforce current strict liability rules for the risks posed by third-order technologies, complement such policies with the extension of tortious liability doctrines, or step away from traditional approaches to the field, endorsing new kinds of responsibility and liability for the laws of outer space.

The fourth and last class of legal issues concerned the uniqueness of the challenges brought forth by AI systems when deployed in outer space. Drawing on the case study of Sect. 5, Sect. 6 illustrated the ways in which the increasing use of AI systems, such as autonomous space objects, humanoid robots, and other artificial companions, may realign the thresholds of evaluation that current standards of human-AI interaction, HRI, social robotics, and the law have developed so far. The realignment is fourfold. It regards the development of new sui generis standards for outer space; stricter or more flexible standards for AI in outer space that can benefit from the current debate and institutional work in progress in, e.g., consumer law, machinery regulation, or software services; down to what we presented as the “principle of equality.” At times, the unique challenges of AI in outer space may recommend the softening of legal standards: this is what scholars will increasingly discuss vis-à-vis issues of data protection, privacy, trust, or confidentiality for HRI in outer space. Yet, under other circumstances, current trends of space law, such as promises of mass space exploration, space tourism, or the growing dependence of Earth on space-based services, may suggest complementing current legal standards with stricter rules of liability and responsibility for both states and private companies. The golden rule for taking sides in such cases is the principle of equality between robot standards and human standards. If in doubt, we should bear in mind that humans and the law should not react with supererogatory solutions to some extreme challenges of AI in outer space. Rather, the equality principle endorses a non-discriminatory approach between human and artificial behavior with equivalent effects, according to what EU lawmakers often present as a principle of ‘functional equivalence’ (e.g., Recital 72 of the AIA). The latter can be conceived of as a further response to the legal challenges of what we presented, above in Sect. 3, as the exemplar of every ‘third-order technology.’

The realignment of legal standards for AI and HRI in outer space complements the previous three layers of complexity in this domain of technological regulation. What is unique to AI in outer space can indeed be further appreciated vis-à-vis some shortcomings of current public international law and the role that national lawmakers may play in filling the loopholes of this legal framework through their own provisions in consumer law, machinery safety, or cybersecurity standards for uses of AI in space missions. Likewise, how AI may realign standards in outer space must be related to current trends of privatization, with the corresponding legal problems of third-party liability, arbitration clauses, or jurisdiction. By drawing attention to the open issues that experts in space law shall increasingly address over the next years, the paper illustrated how such issues, although diversely intermingled, shed light on the class of normative challenges that the use of AI systems raises only in outer space. The analysis offered some guidance for this new kind of research, drawing on previous work and current research on the safety and integrity management of missions and operations in harsh environments (Sect. 6.1); trust and data protection in HRI (Sect. 6.2); liability safeguards for third-order technologies (Sect. 6.3); and legal methods of technological regulation, such as the principle of functional equivalence between human standards and robotic standards for the use of AI under the extreme conditions of outer space (Sect. 6.4).

The overall intent of the analysis was to complement current discussions on whether and to what extent the standards of conduct in space law should be ameliorated with a further set of issues that regard the development of new standards for the manifold AI systems in outer space missions and even colonies. Such standards shall provide a threshold of evaluation for the assessment of the benefits and risks that AI systems pose only in outer space. On the basis of new sui generis standards for outer space, stricter or more flexible standards for AI and HRI in space missions, down to the functional equivalence of human and robotic standards, we may dare to say that a whole new universe of legal and moral questions has opened up, just waiting to be explored. The speed of technological innovation and human ingenuity will progressively put this normative universe in the spotlight.