Introduction

The purpose of this article is to explore the democratic and epistemic challenges of ethicisation in the context of emerging technologies, with a specific focus on how the notions of under-reliance and over-reliance on ethics expertise can unpack the processes at play. Using the EU process on bio-patents in biotechnology and the publication of ethical guidelines for AI development as illustrations, I demonstrate how ethicisation may give rise to democratic and epistemic challenges that are not explicitly addressed in discussions on the political use of ethics expertise.

Ethicisation refers to how technological and other issues are commonly framed as ethical, and how ethics is perceived to be a tool for resolving conflicts of interest, dilemmas or controversies (Bogner 2010; Cavaggion 2019). A tendency to frame scientific and technological phenomena in ethical terms can be observed in different areas of society; some even call it a hegemonic trend (Bogner 2009; Petersen 2011). For instance, in the wake of gene technological and other biotechnological advances in the twentieth century, the field of bioethics emerged as a response to the ethical dilemmas and controversies that this development gave rise to (Evans 2002, 2012). As a consequence, bioethics expertise is called for in an advisory function in the clinic (Fox et al. 2007, p. 19) as well as at the policy level (Trotter 2002). Medicine may be the most obvious field to deliberately reflect on and act upon the ethical consequences of its research and practice, but ethical dimensions are addressed in practically all areas of society, such as engineering (Bowen 2014), accountancy (Kumarasinghe et al. 2021) and education (Falkowski and Ostrowicka 2021); professions in general have their own ethical codes (Illinois Institute of Technology); public authorities and private corporations produce policy documents outlining the moral values on which their activities are based and to which their employees should adhere (Andersson and Ekelund 2021); and policy-makers ask for ethical input on morally sensitive issues, especially so in the context of the regulation of new technologies (Bogner 2009).

A case that has left its mark in this regard is the controversial question of biological patents in the European Union (EU) in the late twentieth century, which was resolved only when ethics expertise was called in (Tallacchini 2015; Busby et al. 2008). A recent example that to some extent draws on the bio-patent case in its attitude to the role of ethics is the ongoing process on the regulation of artificial intelligence (AI) in the EU, a process in which input from ethics expertise was the point of departure (EGE 2018). The publication of ethical guidelines for AI development by both public and private entities (Jobin et al. 2019) is another example of how ethical dimensions are addressed in relation to AI development; the last five years have witnessed the dissemination of some hundred such guidelines.

Apparently, essentially different areas, facing dissimilar problems, increasingly turn to ethics as a guide to morally better solutions to ethical problems, but also to problems that may involve other aspects, or to issues that may not primarily be ethical in character. One implication is that moral philosophers and other ‘ethics experts’ are frequently asked for advice and may exert influence on ethical guidelines and policies that are supposed to govern people’s behaviour in a desirable way (Trotter 2002). Calling for experts on ethics when ethically complicated questions need to be handled is undoubtedly a good thing, helping us to uphold central virtues (cf. Tiesenkopfs et al. 2019), but there are also problems connected with ethicisation. The turn to ethics may not always be a sign of a sincere aspiration to moral performance, but can also be a strategic move to gain acceptance for controversial or sensitive activities (Hagendorff 2020; Busby et al. 2008; Littoz-Monnet 2015); ethicisation may depoliticise questions and thereby constrain the room for democratic participation (Urbinati 2014; Hedlund 2014); and defining questions as questions of ethics may individualise structural problems in a way that removes them from political consideration (Petersen 2011). Nevertheless, ethicisation, and the ensuing call for ethics experts, suggests an expectation of confidence in ethics and ethics expertise, and a belief that ethical guidance is an effective way of governing people’s behaviour in a morally desirable way. However, this confidence has a reverse side: too little or too much confidence may give rise to epistemic and democratic concerns.

As will be demonstrated, a further implication of ethicisation is two interrelated problems with confidence that are only implicitly addressed in the ethics expertise literature. The first problem is what I denote under-reliance on ethics expertise, namely the risk that ethics advice is asked for, but for some reason is not listened to. Ignorance or resistance among those who are supposed to implement the ethics, or a lack of enforcement mechanisms, can sometimes explain why ethics advice is not put into effect, but ethics may also be used strategically to gain acceptance for controversial or sensitive activities and thereby run the risk of being perceived as something like an ethical alibi. The second problem is what I call over-reliance on ethics expertise, which occurs when the fact that ethical advice has been provided sends the message that the issue at hand is exhausted, with the risk that other urgent aspects will be overlooked. Both under-reliance and over-reliance constitute epistemic as well as democratic challenges with the political use of ethics expertise. From an epistemic point of view, both challenge the confidence in ethics expertise, albeit in different ways. From a democratic point of view, under-reliance on ethics expertise may give rise to legitimacy concerns, while over-reliance may narrow the elucidation of an issue and, as an effect, limit the space for potential alternative interpretations.

To illustrate how ethicisation may give rise to problems of under-reliance and over-reliance on ethics expertise in the context of emerging technologies, I will look into two cases with potentially huge implications for society: biotechnology and AI. Both biotechnology and AI open up the prospect of enhancing and even creating life, and so give rise to existential questions of life itself—the metaphor ‘playing God’ is used as criticism towards gene technology (Evans 2002) as well as towards AI (Gent 2015)—and thus bring ethical issues to the fore. And even though living machines are highly unlikely in the foreseeable future, AI technology gives rise to intriguing questions of safety, privacy, discrimination and other issues that are frequently addressed in ethical terms.

In the following, I clarify what I mean by ethicisation and reliance on ethics expertise, define a concept of ethics expertise and problematise the use of ethics expertise in democratic decision-making. Next, I discuss epistemic and democratic worries that are commonly brought up in the literature on experts and democracy, and argue that ethicisation requires that we also pay attention to under-reliance and over-reliance on ethics expertise. After that, I situate biotechnology and AI in the context of ethicisation, and analyse the two cases with respect to under-reliance and over-reliance on ethics expertise. Finally, I discuss how under-reliance and over-reliance on ethics expertise add to the democratic and epistemic worries that the political use of ethics expertise gives rise to.

Some Initial Clarifications

‘Ethicisation’ is used with different meanings and can denote, for instance, the invocation of immaterial values in constitutional law (Cavaggion 2019) or the process of working out a code of ethics in an organisation (Carroll 2015). In this paper, ethicisation is understood as the tendency to frame problems as ‘ethical’ at the (possible) expense of other aspects, such as economic, legal or political ones, and, more specifically, as the tendency of policy-makers to frame issues with emerging technologies in ethical terms.

Ethicisation should not be confused with reliance on ethics expertise. Whereas ethicisation is about the tendency, or trend, to turn to ethics, reliance on ethics expertise could be seen both as a presupposition for ethicisation—without reliance on ethics expertise, policy-makers might not turn to ethics to begin with—and as an effect of ethicisation: the fact that problems are defined as ethical problems gives reason to expect that experts in ethics are involved. It is this latter aspect of reliance on ethics expertise that is of main interest in this study.

Reliance on ethics expertise is thus in this paper primarily studied as an effect of ethicisation and is understood as the ‘neutral core of trust’, meaning that we ‘rely on someone to do or ensure something when we judge them to have the relevant competence, motivation, and opportunity’ (De Fine Licht and Brülde 2021). Hence, reliance on ethics experts is about them having the relevant competence in ethics, i.e., that they are in fact ethics experts; that they are motivated to do what they are expected to do in a given situation, e.g., providing clarifications on ethical matters; and that they have the opportunity to do so, e.g., that they are asked to act as ethics experts in a governmental committee. While not denying that reliance on ethics expertise is a condition for ethicisation in the first place, ethicisation also has effects on reliance on ethics expertise. One such effect is that ethicisation can lead to under-reliance as well as to over-reliance on ethics expertise. This is the central concern of this paper, which focuses not on why ethicisation occurs, but on how ethicisation affects reliance on expertise and on the epistemic and democratic challenges that this gives rise to.

Depending on how the turn to ethics plays out, ethicisation can lead to under-reliance or over-reliance on ethics expertise. Under-reliance on ethics expertise points to situations when ethics advice is called for but not necessarily taken into account. In such cases, ethics may have been used as legitimation, to send signals of confidence or to put the public at ease. In other words, under-reliance depicts situations when ethics becomes an alibi for controversial decisions or lines of action. Certainly, this presumes strategic moves or carelessness regarding the impact of ethics, but under-reliance can also occur without anyone intending or aiming for it. Over-reliance, on the other hand, applies to a prioritisation of ethics that makes it difficult for other perspectives to come through, with the risk that important aspects will not be considered. The extent to which over-reliance occurs depends on the scope of the commission given to the ethics expertise. When ethics is defined broadly and includes, for instance, socioeconomic or redistributive aspects of the issue at hand, the risk that such aspects are overlooked is smaller than when ethics is more narrowly defined. On the other hand, an extensive definition of ethics means that more aspects will be dealt with by the ethics experts and thereby not necessarily leave room for citizen concerns (Littoz-Monnet 2021, p. 31). Hence, over-reliance may give rise to democratic concerns in different ways. In cases when ethics input is expected to be provided by experts in ethics but is not, we could face both under-reliance and over-reliance on ethics expertise: under-reliance insofar as the outcome of the job is deemed inferior; over-reliance insofar as the expectation is that ‘proper’ experts are doing the job. What, then, is a ‘proper’ ethics expert? What do we mean by ethics expertise?

Ethics Expertise

Ethics expertise has emerged as a category of specialised knowledge to be used in value-based questions, and we could expect decision-makers to resort to ethics expertise in situations when it is impossible to reach agreement on controversial and sensitive policy choices via democratic processes (Littoz-Monnet 2015). As for the special kind of expertise that scholarly philosophers contribute, we need to consider whether we can equate the expertise of experts in ethics with the expertise of experts in climatology, computer science, epidemiology or other scientific disciplines. Philosophers themselves tend to disagree on the matter (Moreno 2006), but considering the fact that something designated ethics expertise plays a role in policy-making processes, it is important to clarify what this term refers to. While the specific expertise required to be denoted an ethics expert in, for instance, clinical settings is a contested issue (Scofield 2018), scholarly training in moral philosophy and certification by philosophical and bioethical institutions are arguably signs of some expertise in ethics, and practical training in applied ethics could confer the expertise required to be considered an ethics expert (Sanchini 2015).

To some degree, the scholarly debate on the societal role of moral philosophers has been formulated in terms of the possibility of such expertise (e.g., Tong 1991; Parker 2005; Sanchini 2015). However, this way of putting the issue is partly misguided, as the problem is not whether moral philosophers could be experts—they certainly can, on many things. Rather, the debate concerns their role as advisers on questions of morality, and their alleged expertise in knowing the right morality. Supposed moral expertise would refer to expertise in making judgements about right and wrong in an absolute sense. So understood, a moral expert is only possible if moral judgements are objective. This is a position of moral realism, the view that there are knowable and objective moral truths (Dancy 2011), a position that is widely disputed (see, e.g., Sinnott-Armstrong 2019). However, as Waldron (1999, ch. 8; see also Yoder 1998) argues, even if moral realism is correct, the fact of moral disagreement makes it impossible to identify the moral experts, and hence moral realism is irrelevant. Thus, the problem with the debate is how expertise is (implicitly) understood. As long as ethics expertise does not refer to some kind of moral expertise (cf. Friele 2003), ethics expertise is possible in the same way as epistemic expertise.

Drawing on Hedlund (2014), I contend that it is fruitful to make a distinction between, on the one hand, ethics expertise, referring to expertise in ‘providing systematic analysis of ethical concepts and positions, presuppositions of such positions and the relations and the distinctions between them’ (2014, p. 285) in order to illuminate thinking and to encourage an informed ethical debate (Nussbaum 2002; Brock 2006), and, on the other hand, moral expertise, referring to expertise in evaluating the rightness of moral judgements. Such a distinction recognises the ethics expert as the counterpart to the expert in epidemiology or climatology. They are all examples of ‘specialists in a well-delimited and commonly accepted competence area’ (Hedlund 2014, p. 284), and have or are perceived to have cognitive authority (Turner 2003). Further, this authority is based on an ability to justify claims ‘above and beyond the sphere of subjective opinion and belief’ (Grunwald 2003, p. 111). If we apply this way of circumscribing the characteristics of ethics expertise, ethics experts possess knowledge by virtue of which they can speak with authority as to which conclusions follow from different moral theories, without taking a stand on which of these conclusions is preferable; they have expertise in clarifying and illuminating moral problems and in presenting alternative positions and justifications for those positions, that is, ‘clarifying expertise’ (Hedlund 2014, p. 288). So understood, ethics experts can contribute to stabilising and legitimising disagreements in policy contexts (Bogner 2010).

While we can conclude that it is reasonable to accept the notion of ethics expertise, the way that ethics experts are used in policy-making on emerging technologies may give rise to epistemic and democratic worries of the same kind as those raised by the use of epistemic expertise. Next, some such worries will be outlined.

Epistemic and Democratic Worries

Like all of us, experts can make cognitive mistakes, and they may be driven by ideologies or biases, pointing to the epistemic worry of whether experts contribute to more rational and informed decisions. However, this worry builds on the assumption that policy-makers use expert knowledge in a rational way (Boswell 2009). Whatever the quality of the expert advice, the mere fact that knowledge and research results are made known to decision-makers does not mean that they will be incorporated into political decisions. For instance, despite the vast knowledge about the need to reduce emissions of CO2 into the atmosphere to put a brake on global warming, emissions continue to increase. Political decision-making is an act of balancing different interests and values (Douglas 2009), and climate change is just one example illustrating that there are also considerations other than facts and knowledge that political decision-makers need to attend to. In addition, decision-makers sometimes make use of expert knowledge symbolically, to send signals of rationality, as ammunition in contested issues, or to substantiate a position that is already taken (Boswell 2009). All this contributes to the epistemic worry about the rationality of political decisions (although given the logic of politics, this is to be expected), but it also gives rise to democratic worries.

From a democratic point of view, the political use of expertise may primarily have implications for equality (Turner 2003). When experts participate in democratic processes, there is a risk that their involvement goes beyond the ‘objective’ conveyance of ‘speaking truth to power’ (Wildavsky 1979). Personal values and preferences might affect the professional judgements of experts or give certain values a fact-like status (Dzur 2008; Evans 2006), and because of their superiority in their specialist field, experts have good opportunities to define the problems to be considered. As is widely recognised in policy theory, problem definition is an influential tool for guiding the direction of the policy process (Barbehön et al. 2015). As for ethics experts, it is not improbable that they have a propensity to call attention to the ethical aspects of problems.

While I do not intend to downplay the importance of ethics and the role of ethics experts in helping decision-makers to orient themselves in the morality of policy issues, ethicisation points to something slightly different, namely, as outlined above, the tendency to frame questions as ‘ethical’, with the risk that other perspectives will be omitted. This is not to say that there are no ethical aspects in policy issues, but if policy issues are understood solely in moral terms, there is a risk that societal and structural aspects are overlooked (cf. Dowding 2020). As I will show in the analysis of biotech and AI, disregarding certain facets of an issue is an important part of the problem of over-reliance on ethics expertise.

Calling for ethics expertise in political decision-making processes warrants attention also from a democratic point of view. As outlined in Hedlund (2014), an advisory role of ethics experts in policy-making processes gives rise to some particular issues. One worry concerns the case when advice involves ethical recommendations, meaning that ethics expertise is in fact expected to act as so-called moral expertise, as these concepts are defined here. Giving recommendations is a normative endeavour, and that is problematic for democracy for the same reasons as it is problematic if any experts have more say on normative considerations than do non-experts. In a democracy, we should expect value questions to be dealt with in public deliberations in which different positions are settled on the basis of democratic principles, not to be answered by expertise. Although recommendations are just that, and not decisions, if decision-makers ask for ethical recommendations, it is not unreasonable to believe that they will base decisions about value questions on expert authority, that is, on alleged moral expertise. From a democratic perspective, the problem with this would be twofold: firstly, the notion that questions of value could be resolved by expert knowledge, and secondly, the supposition that there is something like moral expertise that could provide the right value.

Ethicisation in the context of emerging technologies adds to these epistemic and democratic worries by giving rise to under-reliance and over-reliance on ethics expertise, as will be demonstrated by the cases of ethicisation in biotechnology and AI.

Ethicisation in Biotechnology and AI

Biotechnology and AI are particularly interesting from the perspective of ethicisation, not only owing to the great emphasis on ethical aspects of these fields in policy and other contexts, but also due to the special characteristics of these technologies. Biotechnology can be described as the integration of natural sciences and engineering sciences that makes use of living organisms to develop or create different products, which raises hopes of providing solutions to urgent societal problems such as curing diseases (Wahlberg et al. 2021) and mitigating climate change (Show et al. 2021). With the possibility of making changes in genetic material, biotechnology has advanced rapidly, but the modification of genes in living organisms, including human embryos, has raised concerns and criticism (Lima and Martínez 2021). The question of patenting genes in the EU in the late 1980s and 1990s is an example of how public worry puts ethical values up front.

AI is commonly defined as non-organic systems that can think and act rationally and similarly to humans (Russell and Norvig 2010). More specifically, AI can be described as ‘systems that display intelligent behaviour by analysing their environment and taking actions—with some degree of autonomy—to achieve specific goals’ (AI HLEG 2019, p. 1). These understandings point to characteristics of AI that distinguish it from other technologies: its ability to ‘learn’ and to act autonomously. While these attributes have given rise to a lively discussion about AI agency and the implications for responsibility (Laukyte 2017; Gunkel 2017), this paper focuses on how the implications of AI technology—agentic or not—are framed as ethical.

It will now be demonstrated how ethicisation of biotechnology and AI may lead to under-reliance or over-reliance on ethics expertise.

Under-Reliance on Ethics Expertise

The risk that ethics advice is asked for but not taken into account is a case of under-reliance on ethics expertise. Such under-reliance was present in the process of bio-patenting in the EU at the end of the twentieth century. Worries about competitiveness vis-à-vis biotech industries in Japan and the US had put ‘enormous pressure on the EU to harmonise the disparate provisions of its member states concerning biotechnology patents’ (Jasanoff 2005, p. 219). Originally proposed in 1988, Directive 98/44 on the legal protection of biotechnological inventions was not adopted until 1998, and the main reason for the extended process was the controversial ethical issues concerning patent law and biotechnology (Busby et al. 2008). Citizens worried about the patenting of human biological material, and in several member states there were critical debates and objections to the extent of the protection of gene patents (Jasanoff 2005).

In response to the intense debate, the Commission established the Group of Advisers to the European Commission on the Ethical Implications of Biotechnology (GAEIB), later replaced by the European Group on Ethics in Science and New Technologies (EGE). The task of GAEIB/EGE was to advise the Commission on the ethical aspects of biotechnology and to keep the public informed (EC 1997). The group was, however, only given the limited role of providing guidance on basic and general ethical principles, not on particular inventions or patents (Plomer 2006, p. 118). Moreover, the European Patent Office (EPO) pointed to methodological weaknesses in the recommendations, such as reliance on opaque concepts and failure to attend to the pivotal distinction between human embryonic stem cells and other types of stem cells (Plomer 2006, p. 199). Nevertheless, the EGE came to play a crucial part in the realisation of the directive on biotechnology patents (Mohr et al. 2012; Busby et al. 2008). As put forward by Busby et al., ‘without the ethical stamp of approval from the GAEIB/EGE, Directive 98/44 might never have been adopted’ (2008, p. 814). In other words, the ethics expert group was needed to legitimate the patent directive. In that respect, it could be argued that ethics expertise was used symbolically.

When technical expertise is used to legitimate a political decision, it has the symbolic function of demonstrating rationality, especially in contested issues with a high level of salience (Boswell 2009). The patent issue was no doubt a contested issue with a high level of salience, and the public struggle was about moral values. For the European Commission, explicitly addressing ethical challenges would facilitate acceptance of the benefits of biotechnology and thereby ensure a single market for its products (Tallacchini 2015; Busby et al. 2008). The mandatory procedural step of consulting the EGE whenever a directive touches upon values provided an ‘aura of democratic legitimacy’ (Tallacchini 2015, pp. 164–165), and the ‘appropriate advisory structure’ (CEC 1991, p. 18) provided by the establishment of GAEIB/EGE could fulfil the symbolic function of demonstrating ethical rationality. However, ‘ethics’ was loosely defined (Jasanoff 2005), and the kind of expertise required for appointment to the group was not necessarily scholarly expertise in ethics (Plomer 2006, p. 122). Rather, members should be ‘recognized experts’ (CEC 1991, p. 16), serving in a personal capacity and independently of any outside influence (Plomer 2006, p. 122); among the then six members, one was a philosopher, and the others came from law and genetics (Jasanoff 2005). Both the criteria for appointment and the independence of the group have been questioned (Plomer 2006, pp. 123–125), and there have been concerns that ethics committees of this kind are prone to ‘political capture’ (Plomer 2006, p. 126). While including only ethics experts would arguably imply a larger risk of certain aspects being overlooked than a more diverse composition, as in this case, the very commission to consider the ethical aspects delimits the aspects to be handled.

All this indicates a risk that ethics may have been used as an alibi for contested issues, which is a sign of what I denote as under-reliance on ethics expertise. The emphasis on the importance of including ethics, together with the fact that being an ethics expert was not a requirement, indicates that the Commission did not rely on ethics expertise to the extent that it could have. On the one hand, the explicit task was to consider ethical aspects. On the other hand, the group designated for this task was not entirely composed of ethics experts, as this notion is used in this paper. This suggests that the Commission was not primarily interested in the best possible ethical advice, but that the aim could have been to bypass or tame the public unease with bio-patents (cf. Littoz-Monnet 2021).

Ethics as an alibi for risky endeavours can also be observed in the rapidly developing field of AI, which, besides the many advantages and efficiencies it provides to society, also gives rise to serious concerns regarding safety (Juric et al. 2020), responsibility (Hedlund 2022; Persson and Hedlund 2021), privacy (Carmody et al. 2021), discrimination and inequality (O’Neill 2016), CO2 emissions and the depletion of minerals and other natural resources (Crawford 2021), and the power of global companies (Zuboff 2019). This has prompted actors from academia, professional communities, politics and business to turn to ethics to ensure that AI is ‘deployed in a manner that respects dearly held societal values and norms’ (Rességuier and Rodrigues 2020, p. 2), but it has also been suggested that the discourse of ethical AI is used strategically ‘to avoid legally enforceable restrictions of controversial technologies’ (Ochigame 2019). In any case, we are witnessing the frequent publication of ethics guidelines on the development and application of AI systems.

Guidelines are a form of soft law that can have various functions, such as codifying common practices or changing professional norms, and they may or may not have a substantial influence (Sossin and Smith 2003). According to the inventory by Algorithm Watch, 173 ethical guidelines on AI had been established by April 2020, and given that, for instance, China (Houweling 2021) and UNESCO (UNESCO 2021), among others, have since published ethics guidelines, we could expect the total number of ethics guidelines on AI at the global level to be close to or in excess of 200. No matter the exact number, the point is that the harmful consequences of this emerging technology are framed in terms of ethics. What effects would this ethicisation have on reliance on ethics expertise?

Certainly, the many different guidelines can be confusing or even give rise to ‘ethics shopping’ (EGE 2018, p. 14), but as pointed out by Hagendorff (2021), many of the AI ethics guidelines build on previously published guidelines and thereby echo one another regarding the topics included and how they are approached. Thus, the guidelines are creating a consensus on which ethical values should guide AI development and implementation (Fjeld et al. 2020; Fukuda-Parr and Gibbons 2021). However, as there are no enforcement mechanisms to back up the normative claims, AI ethics guidelines hardly have any influence on the behavioural routines of practitioners (Hagendorff 2020, 2021; Rességuier and Rodrigues 2020). Weak enforcement may also be a reason why ethics is so appealing to many companies and institutions, which formulate their own guidelines in an effort to evade regulation and suggest that self-governance is sufficient (Hagendorff 2020; Rességuier and Rodrigues 2020).

This approach to ethicisation can be seen as a way of using ethics to legitimate an ongoing business, and without requirements to actually follow the guidelines, ethics is arguably used as an alibi for a large-scale, profit-generating business that is not solely beneficial for society and for which vulnerable groups pay a higher price than already favoured groups. Ethics as an alibi is clearly a case of under-reliance on ethics expertise, whether or not ethics experts are actually involved in putting the guidelines together. In the former case, qualified ethical considerations are not put into practice. In the latter case, ethics may not have been taken seriously in the first place; the endeavour of AI business actors to publish their own ethics guidelines could be a case of both.

In fact, some of the ethical principles suggested in the guidelines do play a role in practice. For instance, the almost ubiquitous principles of transparency and explainability, meaning that decisions taken by autonomous systems should be comprehensible to humans, are questions to which AI developers are paying a lot of attention (Fjeld et al. 2020). While this may be promising from an ethical perspective, there are also some issues with this practice. However desirable the principles of transparency and explainability may be for AI development and implementation, they are examples of ethical values that are most easily operationalised mathematically and thus can be implemented by technical means (Hagendorff 2020, 2021; Greene et al. 2019). While this in itself need not be a problem, it is problematic if technical feasibility decides which ethical values are considered, and how (Greene et al. 2019). One reason for concern is the risk that existing practices for resolving technical problems in AI research and development, which would be carried out anyway, are presented as ethical measures (Hagendorff 2021). Supposing that to be the case, it would be an example of how ethics is used as an alibi, which arguably may lead to under-reliance on ethics expertise. Another possibility is that the ethical framing is too weak, allowing for too much influence of industry interests. We would still have a case of under-reliance on ethics expertise, although not one based on the ethical framing as such. However, as ethicisation has become a main approach to dealing with controversial emerging technologies, it could be presumed that concerned actors adapt to this practice and learn to formulate their issues in ethical terms. If non-experts are in practice behind allegedly ethical recommendations, there is clearly a risk of under-reliance on ethics expertise. Another worry, which will be further elaborated below, is the risk that these very values will be the only ones considered, at the expense of other important values that are not so easily met with some technical fix.

Transparency and explainability, like other ethical values promoted in most ethical guidelines (e.g., privacy, fairness, trust), can also be criticised for how they are represented in the guidelines and for the downsides of the principles as such. Criticism can also be directed at the guidelines for omitting aspects of AI development and implementation that are harmful for society, aspects that arguably should be included in frameworks labelled ‘ethical’. I will now illustrate how these shortcomings can reveal how ethicisation may lead to over-reliance on ethics expertise and run the risk of delimiting the space for public deliberation, thereby constituting a challenge for democracy.

Over-Reliance on Ethics Expertise

As numerous reviews of AI ethics guidelines show, there appears to be an emerging consensus on which ethical values to address. Although other values also occur, a common denominator seems to be transparency and explainability, fairness and non-discrimination, privacy and trust (Hagendorff 2021). Notwithstanding that each of these values is relevant and important, ethical guidelines on AI are criticised for approaching them in a manner that ‘limits what ethics can achieve’ (Rességuier and Rodrigues 2020, p. 1).

Consider explainability. While it is reasonable that individuals obtain explanations of autonomous decision-making processes, especially in the event of unwanted consequences, it is not evident how such explanations should be constructed to be meaningful to the user. Moreover, as pointed out by Hagendorff (2021), the value of explainability itself could be questioned. A mere description of a causal process in a technical artefact would most probably require further explanation. Additionally, an explanation is not a justification of whether a decision is appropriate or acceptable. Finally, even if perfectly explainable AI systems were possible to achieve, that would not ensure that they were not then used for unethical purposes. These issues are seldom mentioned in AI ethics guidelines.

Or take algorithmic bias, referring to how machine-learning systems discriminate based on patterns in the data that the systems are ‘trained’ on (Johnson 2021; Eubanks 2018). The problem of biases in training data is commonly met by pointing to the need for more complete and diverse datasets, but larger datasets could also make surveillance easier and increase the violation of privacy, another value that recurs in AI ethics guidelines. However, how conflicts between different values should be resolved is rarely addressed in these documents (Jobin et al. 2019).

While criticism of the guidelines within academia may be a sign of under-reliance on ethics expertise from agents who are themselves knowledgeable in or experts on ethics, other factors speak for over-reliance. Although ethics guidelines on AI have received a lot of public attention, criticism from the academic community may not always reach the public. Furthermore, the guidelines are presented as guidelines provided by experts. For instance, it is not implausible that an audience assumes that the experts in a ‘group of independent experts’ (EC 2019), as the European Commission put it when it announced the publication of its ethics guidelines on AI, have the relevant expertise. In this context, apart from expertise in AI, expertise in ethics should be expected. Moreover, these guidelines constitute a basis for the Commission’s continuing work towards the regulation of AI (EC 2021), indicating that they will come to play some role in the development and implementation of AI within the EU. Whereas some ethical values discussed in the guidelines are insufficiently considered, the—reasonable—expectation that ethics expertise has played a (considerable) role in the elaboration of ethics guidelines that will influence potential legislation could lead to over-reliance on ethics expertise.

In addition, there is a risk of over-reliance on ethics expertise from the perspective of the European Commission (in this case), but in another way. As we have seen, ethicisation of an issue does not necessarily lead to the requirement that ethics experts, as this notion is understood here, should do the job, and many of these guidelines are not produced by ethics expertise, or at least not mainly by ethics expertise. For instance, of the 52 members of the Commission’s AI High Level Group on Ethics, only three were scholarly ethicists. This could be a sign of over-reliance on ethics expertise insofar as a small minority of ethicists was expected to be able to provide all the necessary ethics considerations. Certainly, we cannot know to what extent these ethics experts really had an influence on the group’s work, but considering that a majority of the members of this group came from the AI industry and/or were experts on technological aspects of AI (European AI Alliance 2022), it is unlikely that the ethics experts dominated the deliberations.

Over-reliance on ethics expertise could also be the case when the existence of ethics recommendations or guidelines gives the impression that the consideration of an issue is exhausted. As several reviews of AI ethics guidelines have pointed out, many relevant values are rarely discussed or not mentioned at all. The point of departure is how AI technology could be made ‘ethical’ in order to be accepted and trusted in society, not the other way round, that is, how AI technology could be used to attain a peaceful, sustainable and just society, or whether these systems should be developed or applied in the first place (Hagendorff 2021). Moreover, the locus of ethical scrutiny is the technical design of AI, not the business of AI (Greene et al. 2019).

One thing that is rarely paid attention to is how dependent the development of AI is on manual work (Crawford 2021). For instance, datasets that are used to ‘train’ AI systems by supervised machine-learning methods need to be prepared by humans (Hagendorff 2020). This is dull, exhausting labelling work, often performed by low-wage labour at clickwork factories, where ‘working conditions are as bad as the market tolerates’ (Hagendorff 2021, p. 8). Such negative effects on third parties are rarely touched upon in the ethical guidelines (Hagendorff 2021).

Another aspect that is not always recognised is how AI development affects the environment. In their review of 84 ethical guidelines, Jobin et al. (2019) observed that 70 guidelines did not include any principles on environmental sustainability. Although the notion of ‘cloud’ computing suggests a lack of materiality, the material conditions for AI are significant, and AI technologies contribute considerably to the carbon footprint and to environmentally harmful waste (Hagendorff 2021). From the perspective of ethics, it is noteworthy that these aspects do not occupy more space in the guidelines.

Indeed, several other important implications of AI have been pointed out as rare or absent in the ethical guidelines, but for the question of reliance on ethics expertise, the examples mentioned here may be sufficient to illustrate that just because some experts have provided ethics guidelines, this does not necessarily mean that all relevant issues have been considered. It could be objected that concerns about aspects such as working conditions and climate are not specific to AI and are covered by other guidelines and regulations. However, presented as ethics guidelines, they may give the impression that the ‘ethics’ of AI is exhausted, implying that climate, working conditions and other omitted concerns are not ‘ethical’ in character. As mentioned, issues of technical design receive much attention, while questions of social situatedness are rarely discussed, and even if all (technical and other) requirements of AI ethics principles were fulfilled, AI applications could still be used in ways that are harmful for the environment and for people. These are arguably ethical questions relating to AI development, the omission of which could lead to over-reliance on ethics expertise.

If important aspects of the downsides of AI are omitted in ethics guidelines intended to direct the development and application of the technology, and if we assume that the elaboration and formulation of these guidelines give room for ethics expertise, then there is a real risk of over-reliance on ethics expertise. On the other hand, if the ethics is shallow or narrow, as the omissions imply, there is reason to ask who is doing the ethics. Although we would expect moral philosophers and other scholarly ethicists to provide the ethical reasoning, the fact that many of the guidelines are provided by the AI industry and AI professionals (Hedlund 2022), and that a public institution such as the European Commission includes only a fraction of ethicists in its expert group tasked with providing ethics guidelines for AI, may be a sign of something else. However, the expectation that proper ethics experts stand behind the ethical reasoning, together with the insufficient or, sometimes, inadequate depth and ‘teeth’ (Rességuier and Rodrigues 2020) of the AI ethics guidelines, gives room for under-reliance as well as for over-reliance on ethics expertise.

While both under-reliance and over-reliance on ethics expertise give rise to democratic concerns, it should be noted that in the EU process behind the development of the AI guidelines, there was an explicit ambition to include actors from different areas of society ‘to allow for broad and open discussion of all aspects of AI development’ (EC 2018), and within the guidelines it is recommended that ‘stakeholders are involved throughout the AI system’s life cycle’ (AI HLEG 2019). Such processes open up the possibility of raising other concerns and potentially impacting policy-making discussions. In that respect, the EU ethical guidelines are not the final word on principles for AI development. However, as pointed out above, problem definition is a powerful tool for guiding the direction of a policy process, and the initial framing of AI development in ethical terms has to a certain extent set the stage for those who enter at later stages.

Conclusions

This paper has demonstrated how ethicisation in the context of emerging technologies adds to the epistemic and democratic challenges associated with experts in policy-making. Ethicisation, understood as the tendency to frame problems as ethical issues, implies increased use of ethics expertise, or at least an expectation that this is the case, and I have shown how ethicisation may lead to under-reliance and over-reliance on ethics expertise. Under-reliance on ethics expertise was discussed in the case of bio-patents, in which turning to ethics became a symbolic strategy when questions of patenting body parts met resistance, and in the case of AI, where it was shown how ethics could be used in a symbolic way to legitimate an ongoing business. The AI case also illustrated the risk of over-reliance on ethics expertise, when it could be expected that all relevant ethical implications had been exhaustively considered, although important aspects had been omitted or insufficiently addressed. In both cases, the fact that it was not always ethics experts who were providing the ethics could lead to under-reliance as well as to over-reliance on ethics expertise.

Ethicisation in the context of emerging technologies directs attention to ethical approaches that are being developed and to some kind of cultural consensus in the making about what is desirable. A democratically problematic aspect highlighted in the analysis is the risk that ethical guidelines give the impression that the issue is exhausted, what I call over-reliance on ethics expertise. But it is perhaps expecting too much that the first ethics guidelines on a new technology should cover all conceivable issues. Provided that they are not the final word, ethics guidelines could potentially open up space for public deliberation. However, the power of defining the problem cannot be disregarded. Ethicisation as it is practised in the cases analysed here gives this advantage to experts (although, as we have seen, not always to experts on ethics).

While I do not intend to devalue ethics as an important contribution to policy-making on emerging technologies, there is a risk that the increasing turn to ethics blocks the view of societal, environmental and other important aspects, especially when the focus is on making the technology ethical, as seems to be the case for AI. This focus may give the impression that society should adapt to the technology, rather than asking what society needs and developing technology for those needs. Certainly, the symbolic use of ethics that this study has pointed to, which may have under-reliance on ethics expertise as an effect, is discouraging, but to end on a positive note, a more optimistic interpretation from the perspective of ethics expertise is that the very reference to ethics is a sign that ethics is deemed important.