Introduction

Artificial Intelligence (AI)—systems that detect and collect information from the environment and process it to perform calculations or solve complex problems—is an area of strategic importance for economic development and a key component of the Digital Agenda for Europe. However, the EU’s High-Level Expert Group on Artificial Intelligence (AI HLEG) recognizes that “while offering great opportunities, AI systems also generate certain risks that must be managed appropriately and proportionately” (European Commission, 2019). It is therefore important not only to invest in education to drive technological progress; it is also imperative that the new generations of professionals are able to shape technology in a way that respects European values. Thus, Higher Education (HE) should be tasked not only with preparing young people with advanced skills to program applications, but also with preparing all students to understand the implications of AI and to influence its ethical use.

The EU’s digital strategy emphasises the need to train professionals who can “shape technology in a way that respects European values” (European Commission, 2019), explicitly acknowledging that professionals in a wide variety of fields will require knowledge of responsible AI; this has spurred considerable progress in defining what Trustworthy AI means (Floridi, 2019; Chatila et al., 2021; Kaur et al., 2022). In addition, although there is interest from HE institutions in including ethics in programs, the teaching of Responsible AI in HE remains highly understudied and disorganised. The main goal of this paper is to assist educators in HE in introducing responsible AI into their educational programs in line with the European vision. The findings are presented in the form of recommendations both for educators and for policy incentives, translating the guidelines into HE teaching and practice, so that the next generation of young people can contribute to an ethical, safe and cutting-edge AI made in Europe.

To that end, we take as a starting point the High-Level Expert Group’s Guidelines on Trustworthy AI (hereon referred to as HLEG guidelines), which outline the necessary requirements for responsible and trustworthy AI development in the EU. It must also be noted that the present results are based on the work implemented in the context of the Erasmus+ project “Trustworthy AI”,Footnote 1 a two-year project that aimed to introduce a new education framework and resources for teaching AI with an emphasis on the ethical and value aspects of these techniques and systems. In particular, the interviews used as base material for this work’s findings were conducted as part of the first deliverable of the project, the Learning Framework for Teaching AI in HE (Aler Tubella & Nieves, 2021).

The rest of the paper is structured as follows. “Background” Sect. provides the background of this work, while the methodology used in the two phases (literature review and qualitative expert interviews) is described in “Methodology” Sect. The results are detailed in “Results” Sect., leading to a set of recommendations for both teachers and policy-makers in “Recommendations” Sect. “Discussion and future work” Sect. discusses and compares the obtained results and recommendations to those of other works or frameworks and, finally, concludes and points to future work and limitations of the current one.

Background

In 2018 Europe decided to take the global lead in establishing a strategy for AI based on ethical principles, appropriate legal guardrails and responsible innovation. In 2019 the High-Level Expert Group on AI (HLEG) developed the Ethics Guidelines for Trustworthy AI (European Commission, 2019), defining three components which should be met throughout the system’s entire life cycle. According to them, trustworthy AI should be:

  1. Lawful—respecting all applicable laws and regulations;

  2. Ethical—respecting ethical principles and values;

  3. Robust—both from a technical and from a social perspective.

The European Commission (EC) also endorsed seven principles that must be taken into consideration to assess AI systems in the White paper on AI and included them in its proposal for an AI Act; namely:

  1. Human Agency and Oversight;

  2. Technical Robustness and Safety;

  3. Privacy and Data Governance;

  4. Transparency;

  5. Diversity, Non-discrimination, and Fairness;

  6. Societal and Environmental Well-being; and

  7. Accountability.

To operationalize Trustworthy AI, the EC introduced the HLEG trustworthy AI guidelines, which enumerate and clarify the seven requirements listed above. Transforming the HLEG guidelines into specific skills for the actors involved in AI development has been highlighted by the European Commission as a natural step in creating an “ecosystem of trust” for the flourishing of European AI (European Commission, 2020).

It is worth mentioning that the HLEG trustworthy AI guidelines are domain-independent and not tied to any particular AI method. New recommendations regarding Trustworthy AI keep appearing in the literature; nevertheless, most of them identify AI principles that are either a subset or an extension of those introduced by the EC (Li et al., 2023).

One of the most significant challenges in operationalizing the HLEG guidelines is making the trustworthy AI requirement list sectorial, i.e., adapting it to the demands of a particular public or industrial sector, e.g., public safety, healthcare, transport, or defence.

In terms of education, one can recognize that professionals from different sectors also require their own interpretation of the HLEG guidelines; such interpretation, in turn, requires understanding the different concepts that the ethical guidelines introduce.

Recently, leading Members of the European Parliament proposed to include all requirements in the AI Act to underline its main objective of ensuring that AI is developed and used in a trustworthy manner. The AI Act and the HLEG guidelines (upon which the former builds) will have a great impact on public and private parties developing, deploying, or using AI in their practices. The HLEG guidelines laid the groundwork for building, deploying, and using AI in an ethical and socio-technically robust manner, providing a framework for trustworthy AI. The AI Act further refines this framework by introducing numerous legally binding obligations for public and private sector actors, both large and small, that need to be met during the entire lifecycle of an AI system. This means that all organisations need to start building a thorough knowledge base of trustworthy AI.

It is worth noting that achieving the vision of trustworthy AI can also positively impact other worldwide initiatives. For instance, some authors have analyzed the role of AI in achieving the Sustainable Development Goals (SDGs) (Vinuesa et al., 2020), concluding that AI has the potential to shape the delivery of all 17 SDGs, both positively and negatively. In fact, through a consensus-based expert elicitation process, Vinuesa et al. identified that “AI can enable the accomplishment of 134 targets across all the goals, but it may also inhibit 59 targets” (Vinuesa et al., 2020). Hence, one way to prevent the negative side effects of AI on the achievement of the SDGs is to educate professionals in the basic principles of trustworthy AI.

Methodology

Literature review

The goal of the systematic review was to analyse the relevant literature in order to answer the following questions:

  1. What competences and learning objectives are identified when teaching ethical aspects in HE?

  2. How are these competences taught and evaluated?

These questions were particularly selected to address the challenges of incorporating the HLEG guidelines into the classroom. Indeed, whereas the more technical aspects have more established pedagogical methods, the more abstract ethics-related content lacks concrete learning objectives and strategies.

We conducted the literature search on Scopus, to obtain results from a variety of disciplines. We used the following search terms, to be matched in title, abstract or keywords:

ethics AND teaching AND “higher education” AND ( competence OR competency OR skills ).
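
For reproducibility, the same query can also be issued programmatically. The following is a minimal sketch, assuming Scopus API access through the third-party pybliometrics package (the original search was run through the Scopus interface, so current result counts will differ from the 2021 snapshot):

    # Minimal sketch, assuming Scopus API access via pybliometrics
    # (not used in the original study).
    import pybliometrics
    from pybliometrics.scopus import ScopusSearch

    pybliometrics.scopus.init()  # loads the configured Scopus API key

    # TITLE-ABS-KEY matches title, abstract and keywords; PUBYEAR > 2014
    # restricts results to publications from 2015 onwards.
    QUERY = ('TITLE-ABS-KEY(ethics AND teaching AND "higher education" '
             'AND (competence OR competency OR skills)) AND PUBYEAR > 2014')

    search = ScopusSearch(QUERY)
    print(f"Retrieved {len(search.results or [])} publications")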

We limited our search to publications from 2015 onwards and retrieved a total of 50 publications on 09/02/2021 at 15:13. Four papers were not accessible at the time of analysis. We focused on individual research output, so we excluded one book, one editorial, one extended abstract and six review articles (either paper reviews or curricula reviews). Finally, we manually excluded eight papers whose abstracts did not mention anything related to the teaching of ethics-related skills, and five papers were removed upon further reading for lack of relevance (either not focused on HE or not focused on teaching aspects related to ethics). The final output is the 24 papers which we analysed (Mackenzie, 2015; Miñano et al., 2015; Trobec & Starcic, 2015; Biasutti et al., 2016; Mulot-Bausière et al., 2016; Gómez and Royo, 2015; Sánchez-Martín et al., 2017; Galanina et al., 2015; Gokdas & Torun, 2017; DeSimone, 2019; Fernandez & Martinez-Canton, 2019; Lapuzina et al., 2018; Rameli et al., 2018; Riedel and Giese, 2019; Aközer & Aközer, 2017; Oliphant & Brundin, 2019; Brown et al., 2019; Jones et al., 2020; Bates et al., 2020; Dean et al., 2020; Zamora-Polo & Sánchez-Martín, 2019; Ibáñez-Carrasco et al., 2020; Sahin & Celikkan, 2020; Noah & Aziz, 2020). Figure 1 depicts a flow diagram of the selection of papers.

Fig. 1 Flow diagram of the selection of papers for review
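
As a consistency check, the selection flow reduces to simple bookkeeping over the counts reported above (a restatement of the numbers already given in the text, not new data):

    # Restating the selection flow of Fig. 1 as arithmetic on the reported counts.
    retrieved = 50
    not_accessible = 4
    excluded_by_type = 1 + 1 + 1 + 6     # book, editorial, extended abstract, reviews
    excluded_by_abstract = 8
    excluded_on_full_reading = 5

    included = (retrieved - not_accessible - excluded_by_type
                - excluded_by_abstract - excluded_on_full_reading)
    assert included == 24                # the 24 papers analysed in the review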

The papers analysed cover a wide variety of subject areas. Based on author affiliations, 17 countries are covered, as well as 12 subject areas as indexed by Scopus (Table 1).

Table 1 Distribution of reviewed publications by subject areas

Publications identifying specific competences are few, although many mention that explicitly identifying competences is a pressing educational need. On the other hand, most publications propose teaching methods, with a strong focus on learning with a social component of debate and participation between students. For this reason, non-traditional teaching methods like case studies and role-playing are often proposed and studied. Much of the literature consists of describing or evaluating how certain teaching practices were incorporated to teach ethics in specific degrees or modules, and it emphasises the importance of incorporating different dimensions of ethics into education. In particular, professional ethics, in the sense of codes of conduct, is mentioned often.

A discussion of the topics emerging from the literature and how they tie in with the needs identified by educators follows in “Results” Sect.

Qualitative interviews

With the goal of exploring the state of the art of Trustworthy AI in HE, we developed an interview protocol (see Appendix). The interview protocol consists of 8 sections:

  1. Introduction: Introduction of the people involved in the meeting, project background, consent issues, description of process, and follow-up steps.

  2. Introduction of the Purpose of the Interview: Slide deck and agenda for the interview (approximate timeline for each section).

  3. Education case: The interviewee is asked to describe an education case that will be the focus of the questioning. This case can be real or prospective, and is meant to provide contextual information.

  4. General perspectives: This part is a generalised discussion of Trustworthy AI and its role within HE at large. Interviewees are asked to comment on aspects such as national or local education strategies, practices in current education, resources being used currently, and minimum incentives that should be there for promoting trustworthy AI in HE.

  5. Questions on the Assessment List: This section is focused on the HLEG assessment list, which translates AI principles into an accessible and dynamic checklist intended to guide developers and deployers of AI in implementing such principles in practice. Interviewees are asked about its usefulness for education purposes, about its specific inclusion in courses, and about types of support needed to teach it.

  6. Ordering of the Requirements: Participants are asked to rank the 7 Requirements in order of their application/importance in their chosen education case (with 1 being the highest). While doing so, interviewees are asked to define in their own words what each requirement entails, and to justify the ordering that they choose.

  7. Questions for Specific Requirements: For the highest and lowest ranked requirement, interviewees are asked which aspects of it are already considered in their use case, and how. Additionally, they are asked to comment on which questions around this requirement are the most valuable ones for trustworthy AI education.

  8. Closing remarks: Final wrap-up of the interview, voicing of any additional comments.

Partners from ALLAI, Universidad de Alcalá, Maynooth University and Umeå Universitet completed a training session in order to unify how the interviews were conducted. Interviewees were therefore asked the same questions in the same manner. Additionally, interviewers reported on their interviews through a standardised form, identical for each partner. All answers were contrasted in a qualitative analysis.

A total of 11 interviewees were selected for their involvement in HE, whether through governance, program management or teaching. Interviews were conducted over a period of 6 weeks. The experts, with affiliations in five different countries, brought use cases spanning medicine, law, computer science and social sciences (see Tables 2, 3 and 4). The responses from expert interviewees inform the recommendations made in this framework and shed light on the current state of Trustworthy AI in education.

Table 2 Interviewees’ geographical location as given by affiliation (note that some interviewees have several affiliations)
Table 3 Interviewees’ profile
Table 4 Disciplines of the use cases selected by the interviewees

Results

This section is structured around emerging themes from the qualitative interviews.

Lack of explicitness

Table 5 Choice of most important requirement to include in education
Table 6 Choice of least important requirement to include in education

The guidelines in their current form are valued by all interviewees for setting down clear requirements and bringing clarity to their meaning. However, their inclusion in education raises concerns: respondents note that the length and technical nature of the documents make them unsuitable for some disciplines and education levels, and that the wording and scope of the guidelines remain abstract while using technical terminology. In addition, a perspective on how each requirement applies to different disciplines is missing: for example, how should legal scholars evaluate robustness, as opposed to computer scientists? Furthermore, interviewees raise concerns that the translation of different technical terms may bring different perspectives depending on which language version of the guidelines is being studied. A frequent point made by the experts is that different courses may touch upon only a few of the requirements, therefore not looking at the guidelines as a whole but rather focusing on a few specific relevant requirements. Overall, several respondents note that the key aspect of the guidelines is the focus on the human behind the system, and they emphasise the value of conveying to students that the responsibility and ethical obligations of AI development lie with those involved in the process.

There is unanimous agreement among the experts (100% of respondents) that all requirements are relevant, but that their significance and importance for a course varies depending on the topic and the area. For this reason, there was no significant agreement when they were asked to rank the seven requirements in order of importance: each education case elicited different rankings depending on the application area and topics tackled in the course (Tables 5 and 6). Despite the variation in rankings, “Transparency” stands out as being ranked the least relevant in 54.5% of the education cases. The reasons for this rating, however, are very disparate. Some experts believe that transparency is encompassed by other requirements, while others think that other requirements use more basic concepts that are easier for beginners. Additionally, some experts see the rest of the requirements as more fundamental.
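
The percentage follows directly from the sample size: with 11 education cases, a requirement ranked least relevant in 6 of them accounts for 54.5%. The tally sketch below illustrates the computation only; the individual choices are invented stand-ins, with the real per-case data reported in Tables 5 and 6:

    # Illustrative tally only: the individual choices below are invented;
    # the actual per-case rankings are reported in Tables 5 and 6.
    from collections import Counter

    least_relevant = ["Transparency"] * 6 + [    # 6 of the 11 cases
        "Accountability",
        "Technical Robustness and Safety",
        "Societal and Environmental Well-being",
        "Privacy and Data Governance",
        "Human Agency and Oversight",
    ]

    for requirement, count in Counter(least_relevant).most_common():
        print(f"{requirement}: {count}/11 = {count / 11:.1%}")
    # -> Transparency: 6/11 = 54.5%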

In terms of which of the HLEG requirements are already being taught in current courses and programs, 60% of the experts interviewed stated that some requirements are currently included in their education case. A common thread in the expert interviews is that while different requirements are certainly covered in education, they are not explicitly related to AI or to the HLEG guidelines. Many report that topics related to trustworthiness are addressed due to their relevance, but oftentimes no deliberate effort is made to establish an explicit relationship to the HLEG requirements. Likewise, there is no deliberate effort to include the totality of the requirements in education. When asked which requirements are currently covered in their educational case (Table 7), there is considerable disparity, with Privacy and Data Governance topics being taught in 90% of the cases brought by the experts, while Societal and Environmental Well-being and Accountability are each taught in only 40% of the cases.

Table 7 Current inclusion of requirements in education

Need for concrete learning objectives related to RAI

Although specific to each requirement, the questions raised share two common threads: recognition and implementation. On one hand, there is a strong call for teaching students how to recognise whether a requirement is being followed. On the other hand, many questions revolve around technical methods for trustworthy AI development, e.g., record-keeping methods, privacy-preserving data collection methods, explainability methods.

Echoing this idea, when assessing competencies related to incorporating ethical and social dimensions into HE in all disciplines, results of our literature review indicate an emphasis on dual competence (Brown et al., 2019; Noah & Aziz, 2020; Trobec & Starcic, 2015; Zamora-Polo & Sánchez-Martín, 2019; Sánchez-Martín et al., 2017): developing technical competence alongside the ability to understand and act according to ethical and social expectations. Although discussion on specific learning outcomes is notably absent, three learning goals are prevalent for demonstrating mastery of social and ethical competencies:

  • Ethical appreciation/sensitivity: Identifying and understanding the ethical and moral dimensions of a situation.

  • Ethical analysis: Deliberating about actions, how they relate to ethical guidelines and codes of conduct, and their possible consequences.

  • Ethical decision-making/Applied ethics: Selecting and implementing a course of action in response to ethical reasoning.

Some examples of how these competences are identified can be found in Table 8. These findings squarely align with syllabus analysis, where it has been found that the most common sought outcomes for teaching Tech Ethics are variations on “recognize/critique/reason” (Fiesler et al., 2020).

Table 8 Literature quotes reflecting the three identified levels of competence

Thus, both the literature analysis and the expert interviews reveal the need for two different levels of expertise. The first is the call for educating on how to recognise whether a requirement is being followed and, if so, how it is being followed. This competence corresponds to Ethical appreciation/sensitivity as identified in the literature review: understanding what a requirement means in the context of a certain application. In fact, this type of question universally applies to students as citizens, as it allows for identifying and adopting trustworthy technology. In addition, it provides an initial maturity level in terms of understanding the HLEG Requirements.

The second competence identified across requirements corresponds to technical methods for trustworthy AI development. There is consensus across interviewees about the need to teach concrete methods for explainability, traceability, data collection, impact assessment, etc. This necessity closely relates to Ethical analysis and Ethical decision-making as identified in the literature: knowledge of the available technical tools is necessary to be able to make an informed choice and implement it. Since the relevant techniques vary greatly depending on the topic and area of the course, it is particularly important to explicitly include in the curriculum which topics and methods will be addressed (Bates et al., 2020). A selection of the topics that the experts believe are necessary to teach with respect to each requirement is shown in Table 9.

Table 9 Topics identified as relating to each requirement
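
To make concrete what teaching such methods can look like in practice, the sketch below shows one possible classroom exercise for explainability topics of the kind listed in Table 9: model-agnostic permutation feature importance with scikit-learn. The choice of method and dataset is ours for illustration; the interviewees did not prescribe specific tools:

    # One possible classroom exercise on explainability (one option among
    # many, e.g. SHAP or LIME; the choice of permutation importance is ours).
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle one feature at a time and measure the drop in test accuracy:
    # a large drop means the model relies on that feature.
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    for i in result.importances_mean.argsort()[::-1][:5]:
        print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")

Students can then be asked to connect the computed importances back to the Transparency requirement: what do they explain about the model, and what do they leave unexplained?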

Lack of implementable use cases

Uniformly across interviews, experts mention that they do not use any specific resources related to Trustworthy AI. Rather, some mention the use of current topical examples, case studies, and relevant literature. A popular way to introduce Trustworthy AI concepts in the classroom is to discuss current social concerns with the applications of the technology studied in the course. In fact, 6 out of 11 interviewees believe that it would be valuable to relate the abstract requirements set up by the guidelines to more practical terms—either through real-world examples, industry participation or concrete tools to experiment with different concepts in class.

Similarly to the interview results, the literature review reveals that ethical and moral reasoning skills are often taught through student-led methods focused on encouraging reflection and debate amongst students: case studies (Lapuzina et al., 2018), role playing (Trobec & Starcic, 2015), debate (Brown et al., 2019), and experiential learning (Ibáñez-Carrasco et al., 2020). This observation aligns with findings from other literature reviews, which emphasise the prevalence of games, role playing and case studies in Engineering and Computer Science education (Hoffmann & Cross, 2021). Although such methods are widespread, there is dissent in the literature, where some advocate for more formal training in, e.g., moral philosophy, in contrast to student-led activities (Aközer & Aközer, 2017).

The teaching strategies most often used to teach Trustworthy AI aspects influence the type of resources currently available. Indeed, diverse bodies have developed openly available case studies on AI ethics, such as Princeton University,Footnote 2 MIT,Footnote 3 the Markkula Center for Applied Ethics at Santa Clara University,Footnote 4 University of WashingtonFootnote 5 and UNESCO.Footnote 6

When interviewees were asked about what type of resources would be useful for integrating Trustworthy AI in HE, several themes emerged. Firstly, 5 out of 11 interviewees agreed in asking for use cases. Interestingly, there was significant consensus on the type of use cases deemed necessary: they should be realistic and implementable. Indeed, using real cases brought directly from industry, mimicking situations in which graduating students may find themselves, is seen as important for the usefulness of these scenarios. In contrast with the literature, where use cases are often used for reflection and debate, several interviewees suggested that use cases should be used for practical exploration, where students can implement and “play with” different solutions.

Another frequent mention is a need for material to aid in evaluation, i.e., exercises or assignments with a grading guide that can be directly used for assessing students. Indeed, several interviewees shared the difficulty of evaluating knowledge of abstract concepts. Across the literature review and the expert interviews, there is a noticeable lack of consistent methodologies for assessing soft competences such as ethical and social awareness or the understanding and application of guidelines and codes of conduct. The assessment methods uncovered in the literature review mostly rely on self-assessment (Ibáñez-Carrasco et al., 2020; Mulot-Bausière et al., 2016) or experts’ perception of students’ knowledge without explicit grading criteria (DeSimone, 2019; Lapuzina et al., 2018). On the other hand, interviewees either report no explicit assessment of competences related to Trustworthy AI or include it as part of the overall assessment of programming projects.

Distance between policy and practice

Almost uniformly, interviewees state that they are not aware of any specific policy strategies to include aspects of Trustworthy AI in education, either at the level of their institution or at a national level. Simultaneously, interviewees mention that the topic of Trustworthy AI is gaining importance in their organisation, and that they are actively considering how to include it in their programs. This mismatch indicates that even though Trustworthy AI is being introduced into HE, the effort is mainly driven by the educators themselves rather than by organisational or national strategies. This approach presents the risk of a mismatch in competences between programs in different HE institutions, as the introduction of Trustworthy AI into educational programs is carried out independently rather than within a coordinated strategy. This is in contrast to current strategies for AI, which highlight the need to roll out Trustworthy AI education at a national and European level. For example, the Spanish National Strategy for Artificial Intelligence states that “it is essential to ensure that students, teachers, public sector personnel, the workforce in general and society as a whole receive appropriate preparation for and training in AI, from an ethical, humanistic and gender perspective” (Ministry of Economic Affairs, n.d.). Similarly, the Swedish National Approach to AI states that “Sweden needs a strong AI component in non-technical programmes to create the conditions for broad and responsible application of the technology”. Thus, despite a full acknowledgement of its importance, no coordinated effort to incorporate Trustworthy AI education is reaching educators at this time.

In terms of policy needs and incentives to boost the introduction of Trustworthy AI in HE, interviewees voiced a wide variety of needs. A major point of consensus (5 out of 11 interviewees) is the need to invest in Trustworthy AI expertise so that educators are equipped to teach these topics: this can take the form of investing in multidisciplinary training or boosting the hiring of experts in Trustworthy AI aspects to participate in education. This idea aligns with interviewees’ mentions of lacking the time to get acquainted with the topics in order to be prepared to introduce them in the classroom. When asked about risks, there was significant consensus amongst interviewees that Trustworthy AI risks being introduced in HE before institutions are able to prepare, i.e., before there is enough expertise in the topic to be able to teach it competently.

Several interviewees mention the importance of allowing for flexibility in the degree structure to allow for the inclusion of broader interdisciplinary topics. They mention that current policies strictly constrain the learning goals of different programs and leave little room for interdepartmental collaboration and interdisciplinarity. In contrast, Trustworthy AI is seen as a topic that would benefit from students’ exposure to different disciplines, calling for policy incentives that encourage interdisciplinary learning. These thoughts align with recent calls for transversal education that allows for interdisciplinarity when considering ethics in technology (Raji et al., 2021). Another relevant risk mentioned concerns ensuring that students from all disciplines are able to learn about Trustworthy AI: while the topic seems to be becoming a focus in STEM, there were some concerns that other disciplines may not be exposed to it in HE. Several interviewees emphasised that aspects of Trustworthy AI are important for students not only as future professionals, but also as citizens. In this sense, they emphasised the benefits of training a generation of professionals that will possess interdisciplinary knowledge and be able to communicate with professionals from other disciplines on the terms of Trustworthy AI.

Recommendations

For the teachers

Both the literature analysis and the expert interviews reveal the need for different levels of expertise. This is justified in two ways. Firstly, social and ethical issues require different types of skills to recognise, debate, and act upon. Secondly, the interdisciplinary nature of the topics within Trustworthy AI means that students in different disciplines do not necessarily need to reach the same learning outcomes for each requirement: for some, knowledge and identification of potential issues will be necessary, whereas for others it will be necessary to master technical methods and solutions. In addition, as highlighted in “Need for concrete learning objectives related to RAI” Sect., concrete learning outcomes are notably absent in the literature and in practice, making learning goals harder to assess. Since the relevant topics and techniques vary greatly depending on the topic and area of the course, it is particularly important to explicitly include in the curriculum which topics and methods will be addressed (Bates et al., 2020) (as echoed in “Lack of explicitness” Sect.).

Thus, the recommendations for educators are as follows:

  1. Explicitly include HLEG requirements in courses when relevant.

  2. Bridge the gap between requirements and course content by being explicit about which requirements are being tackled in the course and how.

  3. Explicitly include Trustworthy AI development methodologies in curricula (e.g., record-keeping procedures, privacy-preserving data collection methods, explainability tools); a sketch of one such method follows this list.

  4. Set out clear learning outcomes that describe the level of proficiency expected from the student.
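
As an illustration of recommendation 3, the following sketch shows one privacy-preserving data collection technique that could be explicitly named in a curriculum: the Laplace mechanism for differentially private counts. The scenario and parameter values are invented for teaching purposes, not taken from the interviews:

    # Illustrative sketch of one privacy-preserving method (the Laplace
    # mechanism); the scenario and epsilon value are invented for teaching.
    import numpy as np

    def private_count(true_count: int, epsilon: float,
                      rng: np.random.Generator) -> float:
        """Release a count under epsilon-differential privacy.

        A counting query has sensitivity 1 (one individual changes the
        count by at most 1), so Laplace noise of scale 1/epsilon suffices.
        """
        return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

    rng = np.random.default_rng(seed=42)
    # E.g., releasing how many of 200 surveyed students use AI tools weekly.
    print(private_count(137, epsilon=0.5, rng=rng))  # noisy value near 137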

Recommended learning outcomes for each individual requirement are:

LO1: Appreciation: Identifying the applicability of the requirement in different contexts and its different dimensions for different stakeholders.

LO2: Analysis: Deliberating about possible implementations of the requirement, how they relate to ethical guidelines and codes of conduct, and their possible consequences.

LO3: Application: Selecting and technically implementing a solution in response to analysis in terms of the requirement.

For the policy-makers

When it comes to the effective introduction of Trustworthy AI in HE, both the incentives needed and the perceived risks strongly hinge on coordinated policy efforts. As reviewed in “Distance between policy and practice” Sect., current efforts to incorporate Trustworthy AI education seem to occur mostly at the individual educator level. This increases the risk of unequal outcomes and puts educators in the position of having to come up with Trustworthy AI curricula themselves.

Additionally, the introduction of Trustworthy AI in the classroom requires resources: time to develop curricula and learn about the topic, experts involved in education, and a multi-disciplinary perspective. All of these aspects can only be made possible through strong policy incentives that provide these resources.

Thus, we strongly encourage policy-makers to consider the following recommendations when translating the national strategies into practice:

  1. Coordinate the introduction of Trustworthy AI in curricula through national education strategies, ensuring a uniform adoption.

  2. Incentivise HE institutions to obtain the relevant expertise needed to teach Trustworthy AI, both by investing resources in training for educators and by hiring experts.

  3. Incentivise interdisciplinary collaboration in education by valuing it in the curriculum and introducing credits for it.

Discussion and future work

Artificial intelligence is an area of strategic importance for the economic and social development of the European Union and a key component of the Digital Agenda for Europe. However, at the same time that AI systems offer immense opportunities, they create risks and may contravene our democratic or ethical principles in areas such as the agency of human beings, inclusion (or its inverse, discrimination), privacy, transparency and more. HE plays an important role in contributing to cutting-edge, safe, ethical AI. As Borenstein and Howard (2021) write, “if the technology is going to be directed in a more socially responsible way, it is time to dedicate time and attention to AI ethics education.”

Both AI researchers and the organisations that employ them (mostly HE institutions) are in a unique position to shape the security landscape of the AI-enabled world. In this context, Brundage et al. (2018) highlight the importance of education, ethical statements and standards, framings, norms, and expectations, noting how “educational efforts might be beneficial in highlighting the risks of malicious applications to AI researchers”. In the previous sections, a list of recommendations has been formulated for both teachers and policy-makers, arising from this first attempt at translating the HLEG guidelines into HE teaching and practice.

The current findings are aligned with previous studies. For instance, Gorur et al. (2020) explored the ethics curricula of Computer Science courses at 12 Australian universities and found that, while the content of ethics courses in Australian CS curricula varies widely, there tends to be a greater emphasis on professional ethics than on philosophical and macro-ethical aspects, which appear to be neglected; including the HLEG requirements in courses when relevant might alleviate such an omission. A limitation of this work is that we do not have interview data broken down by discipline; it would therefore be interesting to investigate in future work which requirements are already being included in each specific discipline.

As Aiken et al. (2000) identified years before the current focus on Ethical AI, it is essential to look beyond the student-teacher relationship: all stakeholders should be considered, as any of them might be a source of vulnerabilities or asymmetries that increase the risk of the system. The guidelines can be the starting point for that discussion; thus, the appreciation, analysis and application of Ethical AI case studies is another relevant recommendation for teachers.

While many of the HLEG requirements might already be partially reflected in existing legislation, no legislation covers all of them in a comprehensive manner (Smuha, 2020), let alone in a consistent way across all European countries. In the same line, many of the ethical risks raised by the development and use of AI are context-specific, which requires not only a horizontal approach but also a vertical one that holistically takes into account all the possible risks. As Smuha (2020) notes, only a very limited number of initiatives take such an approach into account, and even the White Paper on AI by the European Commission fails to mention the domain of Education (European Commission, 2020), which also points to the actions that are still needed in the form of future policies, legislation and national (or European) education strategies.

As Dignum (2021) writes, “more than multidisciplinary, future students need to be transdisciplinary—to be proficient in a variety of intellectual frameworks beyond the disciplinary perspectives”. This requires a set of capabilities that are not covered by current education curricula and calls for a redesign of current studies, starting by considering (and incentivising) the relevance of interdisciplinary collaboration.

Thus, several relevant questions arise to address in the very near future. Firstly, it is important to understand how education in Trustworthy AI can be rolled out in a coordinated and consistent manner within the EU. Even though the HLEG’s recommendations represent a broad European perspective, the weighing of different values will vary for each society: any effort to standardise education will need to give enough leeway for diverse perspectives.

Secondly, the question of how to “teach the teachers” arises. As many interviewees report, the competence to teach Trustworthy AI is not necessarily found within HE institutions already, not least due to the different aspects that each requirement encompasses. A strong push to develop material, courses and pedagogical guides that cover all seven requirements is therefore essential, and to the best of our knowledge there is a lack of studies on how best to introduce HE educators to these topics.

In parallel, it is important to question the idea of simply training computer science educators to provide familiarity with these topics. Indeed, a common highlight of both the literature and the interviews is the need for interdisciplinary education. These findings echo Raji et al. (2021), who highlight the “exclusionary default” of a purely Computer Science lens on education. For this reason, a key question to explore is how to effectively offer interdisciplinary education in Trustworthy AI. This will require understanding “the level of interaction between different disciplines and constructive alignment” (Klaassen, 2018), and in particular it will require interdisciplinary pedagogical research.