This special section offers a selection of current debates about responsible and ethical innovation, with a view to understanding where debates about the norms of innovation are heading. The section arises from a workshop on ‘The Ethics of Innovation’ that took place at the University of Warwick in 2017.

The article by Sven Ove Hansson provides a qualified defence of a much-discussed principle relating to innovation, the precautionary principle. The spirit of the principle is that there should be a bulwark against unknown problems arising from the deployment of new and untested technologies. The principle urges, roughly, that, where there is a threat to health or the environment, precautionary measures should be taken, even where there is scientific uncertainty about the existence or extent of the threat. Various specifications of the principle exist in policy documents and beyond, and it is sometimes framed as a version of the principle ‘better safe than sorry’. This might seem excessively cautious, given the benefits that technology can provide, including benefits to health and the environment. Hansson makes the case that the precautionary principle of international and EU law, in contrast to the precautionary principle of some discussions in academic philosophy and research policy, is a sound principle that expresses an aspect of normal practical reasoning. The article spells this out precisely. Where there are plausible but uncertain dangers, we act as if the danger exists, even in the absence of a full justification for the belief in its existence: ‘Military commanders do not passively wait for full evidence of a suspected enemy attack before taking counter-measures’, and a ‘safety engineer will close an elevator for maintenance based on rather weak indications that its cables have been damaged, rather than wait for incontrovertible evidence that this is the case.’ Similarly, in science policy, we may take a ‘bypass route’ from some indicative data, a suspicion of danger, directly to a policy, without subjecting the data to filtering through the corpus of accepted science. What kinds of suspicion can trigger this bypass? A suspicion must be more than a ‘mere possibility’, and it should have plausibility that is specific to that risk, in comparison to any ‘alternative postulations’. For example, despite a notoriously fraudulent study, there is no more data suggesting that the MMR vaccine causes autism than there is data suggesting that the MMR vaccine prevents autism, and so the precautionary principle is not invoked.

It is interesting to consider how such a principle applies to very abstract threats, or to scientific practice taken at a very broad level. It has recently been argued [2] that the practice of technological discovery in general carries a systematic, very small danger of a very large catastrophe. For any advance, there is some chance that it interacts with features of the global political order in ways that lead to highly destructive outcomes. Does precaution arise at the level of scientific practice in general? How far do abstract theoretical possibilities count as data that might provide a suspicion sufficient to trigger precaution?

Whereas Hansson’s paper is concerned with how we should respond to imperfect knowledge about the dangers of technology to health or the environment, Philip J. Nickel’s paper is concerned with how we should respond to the disruptive effects of technology upon our understanding of our normative world. Uncertainty about our moral systems can arise when new technologies are put into practice. For example, in the biomedical sphere, long-running practical resolutions to difficult bioethical questions may become unworkable when it becomes possible to create or save lives in surprising new ways. One concrete example (not used by Nickel) is the way that the trimester framework for abortion policy has diminished in authority as technology has improved the viability of the foetus. The question is whether we should always construe the creation of the uncertainties that accompany disruptive innovation as harms or setbacks in themselves, or whether we should construe the creation of such uncertainties as non-harmful when they are part of a broader package of progress. Nickel does not take a side, but sets out the issues with both positions. A difficulty for the first position (the harm account) is that it seems problematic to regret the passing of old and unwanted norms. The uncertainty that attaches to disruptive innovation in these cases ‘represents progress and improvement’. Furthermore, moral uncertainty can be an expression of the inherently desirable or virtuous attitude of a person’s serious consideration of their moral universe. One might say that the discomfort that accompanies moral uncertainty is no more to be regretted than the discomfort that accompanies any other difficult but valuable and rewarding enterprise. The second position (the qualified harm account) can be refined, then, as stating that moral uncertainty is not harmful where it is an element of moral progress and facilitates practical moral deliberation. For those taking this second position, only those innovations that are ‘for the best’ are non-harmful in respect of moral uncertainty, a matter that will be indeterminate at the time of innovation. There is a danger, nonetheless, on this second view, that one undervalues the setbacks involved in the deliberation.

The paper by Stuart Coles and me examines the way that value judgements enter into one of the most well-established technology assessment processes, life cycle assessment (LCA). LCA is a formal system for assessing environmental impacts. The question of whether science is or should be value neutral is important and much discussed (e.g., [3, 4]). The examination of the ways that value choices enter into LCA is of special interest because the process is sometimes characterised as separate from or broader than science, but nonetheless in some sense objective. That is, it functions at the border of policy and science. The paper’s discussion takes place with reference to a particular project on how algal oils can be used to make consumer and industrial products such as inks, cosmetics, and foodstuffs. Such products are currently manufactured with a petrochemical feedstock, and the hope is that, for reasons of sustainability, ways will be found to move away from petrochemical use for such products. The LCA within the project was on the whole not positive about the prospects for algal oils in this role; a motivation for the paper is to explore how far this result reveals an issue relating to LCA in general or to the particular version of it deployed in the project. The paper sets out three areas in which value judgements might be made implicitly or explicitly in an LCA: (i) what precisely one is to assess; (ii) how to make comparisons between the objects of assessment; and (iii) how to respond to uncertainty. The paper concludes by setting out practical ways of embracing, rather than avoiding, the normative issues that technology assessment can raise.

The article by Mrinalini Kochupillai, Christoph Lütge and Franziska Poszler takes as its central target an account of how moral dilemmas should be dealt with by automated vehicles. According to one view, we can provide input into the proper behaviour of automated vehicles in dilemma situations by drawing upon the results of representative mass surveys asking people about the proper course of action in the dilemmas concerned. Thus, Awad et al. [1] note general principles arising from a large survey data set, such as ‘sparing humans over animals, sparing more lives, and sparing young lives’. Supposedly, those programming machines that face such dilemmas may legitimately program those principles in. Why would this be wrongheaded? Kochupillai and colleagues offer a series of objections. First, there are methodological issues. While survey data may reveal truths of some kind, there are reasons why it is not the last word on normative issues. For example, the context of answering a survey question is quite different from the context of facing an urgent dilemma. Second, a series of legal-normative and ethical considerations tells against the implementation of the policies that appear to be proposed by the survey data. For example, it is argued that programming machines to prefer, where deaths must occur, the deaths of older people to those of younger people would violate legal and ethical norms such as the right to life and the equal moral worth of individuals. If the article’s argument is successful, it is interesting not only in its implications for automated vehicles, and indeed for machine learning or artificial intelligence more generally, but also for the way that we assess technology policy: very basic legal standards provide a fruitful route of normative assessment.

In reading these papers together, one takes away a strong sense of how disorienting technological change can be. Pragmatic resolutions to controversies become untenable, and proposed governance procedures for new technologies can sit in tension with well-established principles. One can expect that arguments will continue about the proper ways to respond to the accompanying uncertainties about the impact of emerging technologies upon our social world. Further, we see here different perspectives on how best to access the norms at work: variously encouraging focus upon stakeholder consultation and engagement, mass intuition harvesting, jurisprudential norms, scientists’ ethical expertise, or a pragmatic scepticism about our ability to access knowledge of our progress.