1 The subject matter regulated

Before embarking on a discussion of the regulation of artificial intelligence (AI), it is first necessary to define the subject matter regulated.

Defining artificial intelligence is a difficult endeavour, and in fact many definitions have been proposed, above all in recent years, as the issue has become a focus of general attention.Footnote 1 It is sufficient to note, amongst the most recent, the definition contained in the Communication from the European Commission of 25 April 2018, according to which the expression “refers to systems that display intelligent behaviour by analysing their environment and taking actions—with some degree of autonomy—to achieve specific goals”.Footnote 2

However, although more than 70 years have passed since it was proposed, the most convincing definition is still nonetheless that put forward by Turing in a famous paper from 1950: rather than defining what intelligence is, which is an extremely tall order, it is more appropriate to consider the outcome of a process. If a process is classified as intelligent when it is performed by a human being, then it can also be classified as intelligent when it is performed by a machine.Footnote 3 Thus, broadly speaking, according to Turing artificial intelligence can be defined as the science of getting computers to do things that require intelligence when they are done by human beings.Footnote 4

One can also avoid having to define artificial intelligence altogether by adopting the counterfactual approach proposed by Floridi (2022), which amounts essentially to “I know it when I see it”.

Article 3 of the proposal for a European regulation on artificial intelligenceFootnote 5 leans towards a descriptive definition, providing that an artificial intelligence system is: “software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”.Footnote 6

In any case, irrespective of the definition used, it is important to be mindful of the risk of anthropomorphising artificial intelligence, which may arise in particular from its very definition. In fact, the term “intelligence” has a conditioning effect and induces us to think of an intelligent being.

In this case, a metaphor is usedFootnote 7: the artificial intelligence application behaves as if it were intelligent. However, one must be fully aware of both the benefits and the limits of using metaphors, in order to prevent the metaphor from supplanting the reality it describes (Galgano 2010). Otherwise, usage of the term “intelligence” could implicitly presuppose a subjectivity in the artificial intelligence application, thereby surreptitiously conditioning from the outset any reasoning concerning legal subjectivity.

Thus, if an artificial intelligence is deemed to be intelligent when it achieves results that human intelligence might have created, the subject matter regulated will evidently be extremely broad: any process could be regarded as being intelligent, and thus subject to regulation.

We must ask ourselves at this stage whether lawmakers should pursue an approach that seeks to regulate artificial intelligence as a whole, or whether by contrast they should regulate applications of artificial intelligence in specific sectors or individual areas. The proposal for a European regulation on artificial intelligence chose the former option, and in fact pursues a horizontal normative approach. The latter option by contrast has been endorsed by several international organisations, which take the view that it would be preferable to regulate applications of artificial intelligence, or more specifically their effects, in specific areas.

The issue was broadly discussed at the “UNIDROIT-UNCITRAL Joint Workshop on smart contracts, artificial intelligence and distributed ledger technology” held in Rome at the offices of the International Institute for the Unification of Private Law (UNIDROIT) on 6–7 May 2019. The objective of this workshop was to assess whether any normative action at the international level was necessary in relation to smart contracts, artificial intelligence and distributed ledger technology, and if so what specific form that action should take.

It was concluded at the workshop that an optimal approach would be two-pronged: it would be “defensive” in seeking to adapt existing instruments in line with new technologies, whilst at the same time featuring a “proactive” aspect in creating a few simple rules to facilitate the development of this technology in certain specific sectors. It also became apparent during the workshop that one of the few areas in which it would be desirable to put in place rules is that concerning liability for losses caused by artificial intelligence applications.

Similar reasoning could be followed in relation to specific sectors other than contract law. These may include for instance: protection for personal data processed by artificial intelligence systems; applications within the healthcare sector; the usage of AI by the public administration, including in particular the delicate issue of transparency in relation to the algorithm used when taking decisions; the administration of justice; criminal law; and copyright.Footnote 8

The fundamental question underlying the choice in favour of one approach or the other concerns the purpose of regulation: whether the aim is to set out new rules to regulate a new phenomenon or by contrast to limit normative action to the extent strictly necessary to resolve or remove legal obstacles to the usage of technology.

This dilemma, which has persistently arisen within the dialogue between the law and technology and has been resolved in different ways (Finocchiaro 2020), naturally also arises in this case.

However, the proposal for a European regulation was also adopted for geopolitical reasons, which will be discussed below.

2 The geopolitical context

Within the proposal for a regulation on artificial intelligence, the EU chose a horizontal regulatory approach, despite the adoption by the European Parliament of certain resolutions on artificial intelligence in relation to specific issues, such as ethical aspects,Footnote 9 liabilityFootnote 10 and copyright.Footnote 11

According to the Explanatory Memorandum concerning the proposal, “[i]t is in the Union interest to preserve the EU’s technological leadership” (European Commission 2021a, p. 1). In actual fact, however, the EU does not have technological leadership in the field of artificial intelligence, as it is not one of the largest global producers.Footnote 12 On the contrary, as the Memorandum clarifies, the goal is to “protect the Union’s digital sovereignty and leverage its tools and regulatory powers to shape global rules and standards” (European Commission 2021a, p. 7), which has been the stated objective of the President of the European Commission since she took office.

Therefore, within the geopolitical contextFootnote 13 the European Union’s strategy is to present itself as a leader in the field of rulemaking and to ensure that the European model becomes a global standard and can be adopted within other parts of the world, the so-called “Brussels effect” (Bradford 2020).

The aim is not to compete with China and the United States in terms of technological production, but rather as regards rulemaking. The Memorandum sets out the goal of asserting European “digital sovereignty”, which has an external aspect in being projected towards the other two global actors, as well as an internal effect on the European Member States. The aim is on the one hand to establish a new model and on the other hand to avoid fragmentation.

This once again confirms the strategic design of European lawmakers, the ultimate purpose of which, in this case, is to build a single European digital market, the normative structure of which is fundamentally expressed in four areas: first of all data protection, through Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (more commonly known as the “GDPR”) and the exploitation of data provided for under the Data Act,Footnote 14 the Data Governance ActFootnote 15 and the proposal for a regulation on the European Health Data SpaceFootnote 16; second, digital services and the digital market, through the Digital Services ActFootnote 17 and the Digital Markets ActFootnote 18; third, as regards digital identity, through the review of the eIDAS Regulation from 2014Footnote 19; and fourth, as regards artificial intelligence, through the proposal for a regulation.

This framework safeguards not only fundamental rightsFootnote 20 but also European “values”, a term that is cited a number of times within the proposal, stressing that the model elaborated is not only normative but also cultural. The aim is to make it clear that it is not only legal rules that are at stake, but also the culture that those rules express.

The model adopted in the USA (duly simplified for the purposes of this summary) is a self-regulatory model based on antitrust law. The Chinese model, on the other hand, appears to be a dirigiste model based on State capitalism. China has certainly been increasingly active in producing rules: in the field of data protection it is sufficient to recall the Personal Information Protection Law (PIPL) in force since 1 November 2021,Footnote 21 the Data Security Law (DSL) in force since 1 September 2021Footnote 22 and the Cybersecurity Law (CSL) in force since 1 June 2017.Footnote 23 On the strategic side, the recent creation of the Shanghai Data Exchange (SDE) also pursues the objective of creating a “Shanghai Model” for the sale of data. The ambition of the “Shanghai Model” is to resolve the problems that currently hamper the circulation of data and to present itself as a global reference model for eliminating risks associated with legal uncertainty.

Thus, as always, the regulatory proposal also pursues geopolitical objectives, in seeking to extend the scope of regulation. Using a technique analogous to that used by Article 3 of Regulation (EU) 2016/679,Footnote 24 Article 2 provides that the Regulation applies to “providers placing on the market or putting into service AI systems in the Union, irrespective of whether those providers are established within the Union or in a third country”, as well as “users of AI systems located within the Union” and “providers and users of AI systems that are located in a third country, where the output produced by the system is used in the Union”.Footnote 25

3 The approach under European law

The approach taken under European law to the regulation of artificial intelligence is, as mentioned above, a horizontal approach. The limit inherent within this approach is that, since norms are not intended to resolve specific problems or to fill specific gaps within the legal order, they must necessarily be applicable to any sector whatsoever, for instance throughout the healthcare and financial sectors alike. They are not, therefore, ad hoc rules adopted in order to resolve a particular problem or to remove legal obstacles, but rather general provisions setting out an overall framework, a reference context within which artificial intelligence systems operate, both today and in the future.

The proposal for a regulation starts with a blank sheet of paper and sets out a method for dealing with problems that, considered in the abstract, any artificial intelligence application could create, and which European lawmakers intend to prevent. The dangers identified by the Council and the European Parliament, which are also cited in the Explanatory Memorandum on the proposal for a regulation, led to calls to address “the opacity, complexity, bias, a certain degree of unpredictability and partially autonomous behaviour of certain AI systems” (European Commission 2021a, p. 2).

The proposal for a regulation adopts a risk management model based on the classification of artificial intelligence systems into three categories, depending upon the risks they entail: systems that create an unacceptable risk, systems that create a high risk, and systems that create a low or minimal risk (European Commission 2021a, p. 14).

First, systems that create an unacceptable risk are banned. These include “social scoring” systems and remote real-time biometric identification systems in areas accessible to the public.

On the other hand, low-risk AI systems are subject to various transparency obligations, and the adoption of codes of conduct is encouraged. For example, where AI systems are designed to interact with natural persons, those persons must be informed that they are interacting with an AI system. Similarly, users of emotion recognition systems or biometric categorisation systems must inform the natural persons exposed to such systems of their operation. Along the same lines, users of “deep fake” systems that generate or manipulate audio or video images or content with a high degree of resemblance to existing persons, objects, places or other entities or events, and that could falsely appear to be authentic or accurate, must disclose that the content has been artificially generated or manipulated.

Finally, most of the proposal for a regulation sets out detailed provisions concerning the obligations applicable to the usage of high-risk AI systems. In particular, it is stipulated that any such systems must undergo an ex ante conformity assessment procedure, which concludes with the award of the CE marking. This procedure requires the implementation and maintenance of a risk management system as well as the adoption of various quality criteria for the datasets used for training, validation and testing. A decisive role will be performed by the technical standards drawn up by sectoral bodies, which European lawmakers have thus vested with considerable rule-making powers (Resta 2022; Veale and Zuiderveen Borgesius 2021).

In addition, high-risk AI systems must be designed and developed in such a way as to ensure traceable operation through the automatic recording of events throughout their lifecycle, and must be sufficiently transparent to enable users to interpret their output and to use it appropriately.

Moreover, high-risk AI systems must be designed and developed using human–machine interface tools that enable them to be effectively overseen by natural persons, with a view to preventing or minimising risks to health, safety or fundamental rights. Finally, such systems must be designed and developed in such a way that they achieve, in the light of their intended purpose, an appropriate level of accuracy, robustness and cybersecurity throughout their lifecycle.

The proposal for a regulation also envisages a variety of other obligations, including the retention of automatically generated logs and, in the case of stand-alone high-risk systems, registration in the dedicated EU database.
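By way of illustration only, the tiered logic just described can be modelled in a few lines of code. The following sketch is in Python; the enum, the function obligations_for and the obligation lists are simplifications invented for this example and do not reproduce the proposal’s actual wording or structure.

```python
# A minimal, purely illustrative sketch of the proposal's tiered risk model.
# The tier names and obligation lists paraphrase the description above; they
# are invented simplifications, not the regulation's actual wording.
from enum import Enum, auto


class RiskTier(Enum):
    UNACCEPTABLE = auto()    # e.g. social scoring, real-time remote biometric ID
    HIGH = auto()            # subject to the ex ante conformity assessment
    LOW_OR_MINIMAL = auto()  # transparency obligations, voluntary codes of conduct


def obligations_for(tier: RiskTier) -> list[str]:
    """Map a risk tier to the (simplified) obligations described in the text."""
    if tier is RiskTier.UNACCEPTABLE:
        # Prohibited outright: no set of obligations makes the system lawful.
        raise ValueError("prohibited: the system may not be placed on the market")
    if tier is RiskTier.HIGH:
        return [
            "ex ante conformity assessment concluding with the CE marking",
            "risk management system maintained over the whole lifecycle",
            "quality criteria for training, validation and testing datasets",
            "traceability through automatic logging of events",
            "human oversight via human-machine interface tools",
            "appropriate accuracy, robustness and cybersecurity",
        ]
    return ["inform natural persons that they are interacting with an AI system"]


if __name__ == "__main__":
    for duty in obligations_for(RiskTier.HIGH):
        print("-", duty)
```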

4 Critical issues

It is envisaged that the proposal for a European regulation on artificial intelligence will attain the status of a global benchmark. It is the firstFootnote 26 normative act that aims to regulate the entire sector, whereas various projects are being pursued by international organisations in order to regulate specific applications of artificial intelligence,Footnote 27 given that in many cases the only appropriate rule-making level is the international level.

The model adopted by the European Commission is a model based on risk management, which starts with the classification of systems into three possible classes of risk and then goes on to specify the methods for containing the risks associated with each: in the most serious cases, prohibiting the systems; for high-risk systems, adopting a complex and detailed procedure for the ongoing management and monitoring of risks; and for low-risk systems, providing for transparency obligations.

The European Union is certainly to be commended on having inquired into the problems raised by artificial intelligence and for having attempted to intervene. However, some critical issues are unavoidable (Abriani and Schneider 2021; Floridi 2021; Resta 2022; Smuha et al. 2021; Tampieri 2022; Veale and Zuiderveen Borgesius 2021).

First, the system sketched out by the proposal for a regulation appears to be quite inflexible. The classification of artificial intelligence systems into different types of risk will inevitably be subject to review, as provided for in the regulation itself. New systems not yet contemplated under the proposal will be developed, and new methods for implementing existing systems will be created, thus altering the risk level.

Of course, the proposal for a European regulation on artificial intelligence is not the first instrument in which European lawmakers have established a model based fundamentally on risk management. The most recent and most significant instance is the European regulation on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (i.e. the GDPR), mentioned above. In that instrument, however, the risk management system is subject to the principle of accountability, that is the principle whereby the controller must take appropriate action to give effect to the principles and provisions set out in the regulation, taking account of the specific characteristics of the processing, and must be able to demonstrate that it has done so, thereby enabling the risk management model to be adapted continuously by the controller.

Accordingly, under the GDPR, the controller is the person who is obliged to manage and assess risks, whereas under the proposal for a European regulation it is the legislator that decides which systems are high-risk and how the risk that they create should be dealt with, based, moreover, on an extremely broad definition of artificial intelligence systems.

Thus, a first critical issue consists in the fact that artificial intelligence applications, including future applications, are and will be governed according to the perspective of today, with the result that the normative system is not sufficiently dynamic to adapt to future developments in artificial intelligence.

Second, it must be considered that a risk management model entails a considerable administrative burden: from the drafting of plans, certificates and notices to the production of documentation and markings, the cost of which is borne by companies regardless of their respective sizes and the specific type of AI application at issue.

This mistake, i.e. using the same solution for very different subjects and areas of the law, has already been committed elsewhere, most notably in the law on data protection, which has more recently been reconsidered in the light of the principle of accountability, thereby enabling the action that must be taken to be adjusted in line with the specific facts of each individual case.

The obligations laid down by European lawmakers will naturally have different effects depending upon the subjects at which they are directed. Large companies will presumably not have any particular problem in managing documentation, certification, marking and other requirements. Small companies, and in particular start-ups, will by contrast face considerable financial burdens as a result of those obligations. Inevitably, the burdens and costs associated with protection will differ depending upon the subject that is liable for them. The risk is therefore likely to be high for small companies, start-ups and researchers, which are present in large numbers in this sector in Italy; the European legislation has left to Member States the task of establishing spaces for normative experimentation (sandboxes) and of taking action to support SMEs.Footnote 28

A second critical aspect is thus the adoption of a formal, onerous and undifferentiated approach.

However, from a substantive point of view, the most important question is whether the proposal for a regulation provides a response to the dangers (from discrimination to bias) that prompted its adoption in the first place, and whether it adequately protects the European rights and values, from human dignity to privacy, that it constantly invokes.

The protection provided by European lawmakers is a general and abstract form of protection, consisting in the risk management model provided for under the regulation, along with the prohibitions included within it. No provision has been made for new instruments that people can use, whether acting individually or organised collectively, in order to make protection more effective and swifter. Thus, the protection mechanisms will be largely those provided for under the GDPR, such as the right of access, the right to erasure and the right to data portability. In addition, the substantive principles applicable will be those provided for under Regulation (EU) 2016/679 on data protection: data quality, accuracy, minimisation, relevance, storage limitation, integrity and confidentiality.

Engagement with the more delicate substantive issue of the formulation of a new model for liability has been deferred. The issue was previously raised by the Commission, which proposed the potential creation of legal personality for artificial intelligence applications.Footnote 29 However, the proposal for a regulation only states that the provider of a high-risk AI system must guarantee that the system complies with the requirements. The proposal for a “directive on adapting non-contractual civil liability rules to artificial intelligence” (AI Liability Directive) published on 28 September 2022Footnote 30 follows a minimum harmonisation approach, and is limited to harmonising only those fault-based liability rules that govern the burden of proof for persons claiming compensation for damage caused by AI systems.

To date, leaving aside its strategic value in geopolitical terms, which constitutes its real basis, the proposal for a European regulation essentially sets out an administrative framework for the marketing of artificial intelligence products. The general framework will, therefore, have to be completed by technical rules and standards, which will take on fundamental importance and will be constantly updated.

On a substantive level, the proposal for a regulation is limited to prohibiting artificial intelligence systems that entail unacceptable risks, and to referring, either implicitly or explicitly, to the general principles of dignity, transparency and privacy that now lie at the heart of European law, without, however, stipulating specific arrangements to govern their application to artificial intelligence systems, or any new and more effective forms of protection for individuals.

If the European Union truly wishes to protect fundamental rights and European values, and indeed to turn them into global benchmarks, it cannot limit itself merely to providing for certification according to technical rules adopted by standardisation bodies. If it wishes to assert European leadership on the global stage, it will have to go beyond an organisational and managerial approach and engage with the core, genuinely unresolved issues. Certain problems require solutions that are not merely formal and need to be dealt with resolutely in order to complete the regulation of artificial intelligence. These undoubtedly include, amongst others: the establishment of a new general model of liability for losses caused by artificial intelligence applications that goes beyond the minimum harmonisation approach embraced in the proposal for a regulation and the proposal for a directive; the adoption of new legal solutions to enable transfers of personal and non-personal data to artificial intelligence applications in a manner that fully respects fundamental rights; and the identification of new, effective and rapid instruments for protection against discrimination. This is a very wide-ranging commitment to substantive rights and to the instruments for giving effect to them, which is required in order to complete the regulatory framework. At this point in time, only the European Union is in a position to take up this challenge.

5 Disclosure

The author has no relevant financial or non-financial interests to disclose. The author certifies that she has no affiliations with or involvement in any organisation or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript. The author has no financial or proprietary interests in any material discussed in this article.