Technologies lower constraints and expand affordances. By doing so, they continuously redesign the feasibility space of the agents who enjoy them. The more empowering or enabling technologies become, the more likely they are to change the nature and scope of the risks that they may bring, both in terms of undesirable outcomes (possible damages or losses) and in terms of missed desirable outcomes (potential benefits or opportunities). As a consequence, technologies, by their very nature, tend to redesign the corresponding space of risks in which agents operate and interact. It seems that technologies cannot shape actual constraints and affordances without also shaping the corresponding risks, both positive and negative, like a string of paper dolls cut from the same sheet.

A risk-free technology is therefore an oxymoron, as recent disasters and crises affecting the energy industry have painfully reminded us. Nevertheless, the intrinsically risky nature of technologies should not be a reason for despair, for technologies can also reduce the space of risks and make it more manageable, and this is grounds for some cautious optimism. Let me explain.

Through time, the rather simple dialectic of constraints, affordances, and risks transforms the set of risk takers into a subset of the much larger set of risk runners. The identification of a chariot driver as the only taker and runner of the same relevant risks seems already implausible, to the extent that, even in ancient Rome, there were laws regulating traffic and driving behaviours. Such identification becomes inconceivable once we consider a taxi driver. Now, in politically organised societies, the risk runners, or stakeholders, seek to protect themselves from the consequences of the actions of the risk takers through systems of regulations about standards, protocols, licences, controls, deployment conditions, proper use, safety measures, and so forth. Once such regulations become formalised into legislation, risk management can rely on legal systems and safety technologies (Smith 2011) in order to establish constraints and provide affordances in the development or use of a technology. Both work in the same direction and both can be seen as part of the solution. Together, legal systems and safety technologies constitute what I would like to call a metatechnology, that is, a technology that operates on and regulates other technologies.

The idea that some technologies might be used to implement “the safe, effective, and economical use” of some other technologies is not new. It was already theorised by Bross (1987), following the Three Mile Island accident (1979). Indeed, one could argue that the first governor designed by Watt in 1788 was already a classic example of a metatechnology. What I have in mind here, however, is something slightly different (Floridi 2007). It is the view that a metatechnology should be understood as comprising not only the technologies that operate on other technologies but also the rules, conventions, laws, and in general the socio-political conditions that regulate technological R&D and the subsequent use or application of technologies. It is this broad concept of metatechnology that provides the aforementioned grounds for some cautious optimism, in the following sense.

Consider potential negative risks first, the missed desirable outcomes of a technology. Much of the information economy has been made possible by ICT working as a metatechnology, enabling agents to identify benefits and exploit opportunities (Van den Hoven in Floridi 2010). Likewise, as a metatechnology, legislation can deal with negative risks by offering incentives to agents to become potential risk takers. Germany is a good example. Thanks to its solar subsidies, Germany’s solar energy market is by far the biggest in the world, accounting in 2011 for roughly 40% of the world’s installed capacity. Admittedly, known risks are a bit like pain: they might be unwelcome, but they often signal the presence of some important trouble. So incentives, like painkillers, should be dispensed with care, as they might have serious counterproductive effects, hiding old problems, delaying their solutions, or causing new ones. In the case of solar panels, Germany recently amended its feed-in tariff law in order to slow down the exponential growth in installations, given its financial costs and distorting effects on the energy market. Likewise, from an environmental perspective, it is important to recall that five of the top ten solar panel makers in the world are from China, which held 40% of the world market in 2010. Unfortunately, the Chinese industry has been repeatedly criticised for its very poor record on working conditions, human rights, and environmental protection. A similar analysis could be offered for the sustainability of corn-based ethanol and coal-to-liquid synthetic fuels. Nevertheless, all this should not make us despair. If carefully handled, incentives may turn into investments and build the essential bridge that will be required if the energy industry is to move from polluting to cleaner and renewable sources. That the path might be narrow does not mean that it is not worth pursuing. That it might be the only path merely reinforces the urgency of taking the right steps.

Consider next the potential undesirable outcomes of a technology. A metatechnological legislative approach is often at its best not when it provides affordances by offering incentives to counterbalance negative risks, but when it imposes constraints by enforcing disincentives to cope with positive risks, that is, when it focuses on the don’ts rather than the dos. In this case, the path is much broader and is represented by four main strategies: prevention, limitation, repair, and compensation.

Once again, no metatechnological strategy is infallible. Prevention may be too radical when it imposes a complete ban on a particular technology. For example, in the 1970s, Italy, one of the earliest adopters of nuclear energy, was the third largest producer of nuclear energy in the world, but a referendum held in 1987, in the wake of the Chernobyl disaster (1986), resulted in the phasing out of all existing plants, causing increasing dependence on energy imports and electricity prices much higher than the EU average. Unsurprisingly, the country is now reconsidering the possibility of building nuclear plants (although recent events in Japan have halted the political process). After all, Italy buys electricity from neighbouring France, which generates almost 80% of its electricity from nuclear power.

Relative prevention (understood as measures that allow a technology to develop but seek to prevent, or at least limit, the realisation of its risks, like anti-lock braking systems) and, when this fails, the limitation and repair of the unwanted outcomes that have actually occurred are metatechnological strategies that admit of degrees, so they can be tuned more carefully. Yet this flexibility comes at a price: the more flexible such strategies are, the more they rely on the correct coordination between the relevant legislation and safety technologies. Both may still fail. For example, on 11 March 2011, following the Tōhoku earthquake and tsunami, the 16 nuclear plants in Japan affected by the earthquake, including Fukushima, were switched off within two minutes, with cooling procedures initiating immediately and correctly. That almost all the nuclear plants survived undamaged a natural disaster of biblical proportions was evidence of remarkable resilience. In compliance with the legislation then in force, the Fukushima plant was protected by a seawall designed to withstand a wave 5.7 m (19 ft) high, but the wave that struck it was about 14 m (46 ft) high and easily flooded the generator building. The ensuing problems and hazards were a consequence of a failure of the safety metatechnology as much as of the legal one. In all these cases, however, the crucial point is to realise that increasingly serious unwanted outcomes require ever more advanced, forward-looking, and sophisticated kinds of metatechnologies (legislation and safety technologies), not less. This is where good design can make a very significant difference, both by decreasing the chances that unwanted outcomes occur and by incorporating high degrees of resilience that can make the effects of such outcomes negligible when they do occur. The second worst thing, after a system’s failure, is a system incapable of coping with that failure successfully.

Compensation, the fourth metatechnological strategy, may also be unsuccessful, if badly designed. Compensations are not strategies to prevent the unwanted outcomes of a technology, in the same sense in which home insurance is not a way of preventing fires. Nor should they be seen as deterrents. If deterrence is the goal, then legislation should probably ban the technology in question, or set up a fine-and-points system in which the relevant international authority, for example the International Atomic Energy Agency, is empowered to issue fines and demerits to agents for the losses or damages caused by their technological mistakes (compare some countries’ legal systems, in which a driver’s licensing authority issues demerits to drivers convicted of road traffic offences). Compensations are a means to manage the costs of a technology’s failure, either before it occurs (insurance premiums) or after (repayments). They too may be less than effective if not carefully calibrated. In 1990, after the Exxon Valdez spill in Alaska, the US Congress passed the Oil Pollution Act in order to make “holders of leases or permits for offshore facilities […] liable for up to $75 million per spill, plus removal costs”. This might have seemed a reasonable compensation cap at the time, but, although the exact damages caused by the explosion of the Deepwater Horizon drilling rig on 20 April 2010 remain unknown, it is clear that the costs of private economic and public natural-resource claims far exceed the $75 million cap set by existing oil spill legislation. Recall that, even in the much smaller case of the Exxon Valdez, the costs of damages to natural resources and of private parties’ claims reached $2.3 billion. What is required, then, is a redesign of such legislation, and this is why the White House is, correctly, currently seeking to raise the cap, while BP has pledged to waive it, probably in an attempt to avoid even higher payments due to violations of safety regulations.

Clearly, there are no risk-free technologies, not even in an Amish-style approach to life, because technologies push the limits of the feasible and this, inevitably, comes at some risk. The only completely safe technologies are those never built. Nor are there cost-free solutions for the management of technological risks. But it is equally clear that there are metatechnological ways of dealing successfully with the risks implicit in any technology. We should invest more, and better, in our metatechnologies, because the future will only be more technologically complex and challenging than the past. Of course, by coping with technologies’ positive and negative risks, both legislation and safety technologies may still run into positive and negative risks of their own. But there is no problem of a regressus ad infinitum here, for handling metatechnological risks is no longer a technological issue but an ethical one. What to privilege, how to find and allocate limited resources, and which risks run by whom might be deemed acceptable by whom and in view of whose advantages: these and similar questions do not have uncontroversial answers. They are open problems that require informed, reasonable, and tolerant debate, and an open mind; a philosophical attitude, in other words.

1 About the Issue

This issue is published less than four months after a calamitous earthquake generated an immense tsunami that struck Japan on 11 March 2011, causing the worst natural disaster in the history of that country, with immense damage, huge human suffering, and the tragic loss of thousands of lives. As is very well known, the tsunami damaged the Fukushima nuclear plant, which is still not entirely under control at the time of writing. The risks related to nuclear contamination and the ensuing emergency have caused a wave of international reactions. Some might be discounted as demagogic, hysterical, or merely uninformed, but it is undeniable that an international, well-informed, and reasonable debate on energy policies in general, and on the future of nuclear energy in particular, is much needed. Nuclear energy is very expensive and might remain significantly hazardous for a very long time. Yet it is not riskier than fossil-fuel energy. So far, the Fukushima plant has not caused a disaster even vaguely comparable to the one caused by the Deepwater Horizon oil spill, in terms of human, environmental, or financial costs. Likewise, it is worth recalling that, every year, thousands of workers die in China’s coal mines. Nuclear energy is also much cleaner, and global warming may make it an essential option, despite its high costs, as long as natural gas, renewable sources, and better technologies do not enable us to replace it. What is clearly a matter of political and scientific discussion, and therefore a metatechnological question, is whether, where, and how nuclear plants might be built. The goal of this issue is to contribute to such a debate.

The two research papers on causality, although they do not directly deal with nuclear energy, indicate that aetiological analyses are far from being simple, uncontroversial, or devoid of consequences. Phyllis McKay Illari (Why Theories of Causality Need Production: An Information Transmission Account) analyses the role of production in causal inferences and defends an informational account to support it. George Darby and Jon Williamson (Imaging Technology and the Philosophy of Causality) further support a mechanism-oriented understanding of causality and the thesis that, “at least in the health sciences, to establish the claim that C is a cause of E, one normally needs evidence of an underlying mechanism linking C and E as well as evidence that C makes a difference to E”. It would be easy to extend such analyses to the energy industry. Any change in the causal interpretation of some technological events modifies the nature and allocation of moral and legal responsibilities as well.

The two research papers on technological effects concentrate on design and prediction issues. Neelke Doorn and Sven Ove Hansson (Should Probabilistic Design Replace Safety Factors?) evaluate the trade-off between two ways of handling safety issues at the design stage: the safety factors or margins approach and the probabilistic risk assessment approach, concluding that both are necessary and may be complementary. Sven Ove Hansson (Coping with the Unpredictable Effects of Future Technologies) argues that “methods such as technology assessment and risk analysis have failed to predict the effects of technological choices” and therefore that we need to switch to an approach based on the evaluation of scenarios, what he calls “alternative future developments”.

The four papers lead to a research paper by Behnam Taebi (The Morally Desirable Option for Nuclear Power Production) on the ethics of nuclear power production and the problem of intergenerational justice. This concerns the potential tension between the duty to promote, or at least protect, the well-being of future generations by exploiting our energy resources as well as we can, which would prima facie support the development of nuclear energy, and the duty not to put those same generations at risk, which speaks against the development of nuclear energy systems that might represent a long-term burden.

Finally, the issue hosts two invited commentaries. One, by Wade Allison (We should stop running away from radiation), deals with the nature and implications of nuclear radiation, and suggests a rather bold thesis, namely that we should learn to live with the risks implicit in nuclear energy. The other, by Sabine Roeser (Nuclear energy, risk and emotions), addresses the emotional and heated nature of debates on nuclear energy and argues that emotions can be a source of practical rationality insofar as they can help us grasp the moral aspects of risky technologies. Therefore, they should not be discounted as mere expressions of irrationality.

The energy problems we are currently facing are not going to disappear. If anything, they are being exacerbated by the industrialisation of an increasing number of countries, the rising living standards of their populations, and ever more pressing issues related to global warming. We should address them now, decisively and from a metatechnological perspective, before they become unmanageable or, even worse, irreversible. And we should probably be ready to make some sacrifices in terms of consumption and costs, if our ethical analyses of current and foreseeable metatechnological risks demand them. A better world might well be a more demanding one, both morally and economically.