Introduction

Things bite back, as Edward Tenner explained (1997). Especially technological things. When new technologies are introduced to the market, they rarely behave as hoped. Often they have unintended and unforeseen side effects. These side effects take different forms. Tenner calls effects that are the exact opposite of the intended ones “revenge effects”. For instance, one of the aims of introducing computers in the workplace was to reduce paperwork. In reality both the work and the paper increased vastly. The new computing technologies, which facilitate easy storage of data and easy printing of documents, interacted with the working habits of the majority of people, who like to read from paper. As Tenner’s extensive research shows, the result was that significantly more paper was used, and bureaucracy increased because of the many additional ways to collect, store and rework data (Tenner 1997). Another well-known example of a revenge effect is the energy-saving light bulb of the 1990s. Designed to decrease energy consumption, the technology achieved the opposite (Achterhuis 1998; Weegink 1996). Such light bulbs are expensive to buy, but cheap to use. Consequently, many people started to use them for lighting places that used to be dark (like gardens and corridors), and so overall energy consumption increased.
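To see how the arithmetic of this rebound can play out, consider a minimal back-of-the-envelope calculation (all wattages and burning hours below are illustrative assumptions, not measured data):

```python
# Illustrative rebound-effect arithmetic (all numbers are made up for the example).
# An incandescent bulb used sparingly vs. an energy-saving bulb left on because it is "cheap".

incandescent_kwh = 0.060 * 2 * 365    # 60 W, 2 h/day  -> ~43.8 kWh/year
cfl_same_hours   = 0.011 * 2 * 365    # 11 W, 2 h/day  -> ~8.0 kWh/year: the intended saving
cfl_garden       = 0.011 * 14 * 365   # 11 W, 14 h/day -> ~56.2 kWh/year: more than before

print(incandescent_kwh, cfl_same_hours, cfl_garden)
```

On these assumed numbers, the efficient bulb used as before saves energy, but once it is also left burning in the garden all evening, total consumption exceeds the old level: the revenge effect.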

In these two examples the “revenge effect” occurred because the new technology led to unforeseen changes in user behavior. In this way, Tenner’s analysis draws our attention to the social role of technologies. Technologies affect our values, our standards, our expectations, our goals, our hopes, our routines, and so on. Of course, this social role of technology does not by definition result in revenge effects, nor is this role always negative or unintended. Sometimes the social role is negative, but without thwarting the intended goal of the technology. An automatic door groom makes it harder to pass through the door with a wheelchair and thus may have a discriminating effect, but it does do the work it was intended to do: closing the door (Latour 1992). In other cases the unintended consequences are ethically neutral or their desirability is open to debate. For instance, a remote control is not simply a means to switch channels. It changes the ends of the television watcher, as it invites a different way of watching television. Many hours are now spent zapping through the channels, hoping for news items or for some entertainment. In yet other cases, technology’s social role has unintended but beneficial impacts, as in the case of cell phones causing adolescents to spend a smaller percentage of their allowance on smoking (Kaur 2002; Irvine 2003; Selian 2004). And often the social role is not unintended at all. For instance, designers usually have presumptions about the practices of the user (Akrich 1992). The artifacts they design therefore influence the behavior of the users. Consider, for instance, the height of the average kitchen top: kitchen tops are standardly dimensioned for the height of the average woman. As a result, they are just too low to provide a comfortable working position for many North European men. In this way, they reinforce the sexist idea that women belong in the kitchen. In this case, technology’s social role is inspired by traditional ideas on the division of roles between men and women.

To what extent engineers can be held accountable for the social roles of their artifacts is a complex question. Behavioral changes are rarely if ever caused by technology in a fully deterministic fashion. As a rule, words like ‘facilitate’, ‘provoke’ or ‘invite’ more adequately express the kind of causal relation between technology and behavior that constitutes technology’s social role. This means that the engineer is never solely accountable for the social role of her technology—that accountability is almost by definition shared by the users and other actors. Nor is it clear from a democratic point of view that we as a society want engineers to deliberately shape this social role. Is that not too big a power to leave in the hands of only a few people? But we leave these larger questions for another article and argue that engineers have at least a co-responsibility, and it is this co-responsibility on which we focus in this article.

We propose to expand the engineer’s responsibility to the morally relevant social roles of technologies. Scientists and engineers already accept considerable responsibility for the technical and economic aspects of their work. Increasingly they also consider the environmental impacts of their technologies. We ask them to take a step further and also consider the social role of their products.

Here, responsibility does not refer to liability issues or obligations, as is common in ethics. In the case of liability, being responsible refers to being the rightful target of reactive attitudes: you have done something right or wrong for which you ought to be praised or blamed. The terms “praise” and “blame” are commonly used retrospectively, after the actions have taken place and when it has become clear what the consequences are, while we are more interested in the question of how engineers could try to prevent undesired consequences. Obligations stress that you are responsible for something in the sense that it is your duty to do certain things. Such obligations can be assigned prospectively, but only if it is clear what a responsible action is. This is often unclear when discussing the future social role of new technologies or new uses for technologies, and so duties cannot be defined.

So taking a forward-looking responsibility here means exploring what the social role of a technology might be. But how can engineers take such a responsibility? This article aims to support engineers in taking a prospective responsibility for the future social roles of their technologies: not in the sense of liability (blame or praise after the act), but in the sense of carrying out a reflective analysis with “explicit consideration of ethical issues” (Mitcham 1997). Taking responsibility in this proactive sense means recognizing that your actions can make a future difference (no matter how local), making the effort to find out what is a good thing to do, and acting according to those findings.

Carl Mitcham argued that technology practitioners should have tools that are “sufficiently complex to include a diversity of non-standard technical factors” (Mitcham 1997, p. 275). In our words: they need a framework for exploring the future social role of technologies. Of course, we cannot offer them a crystal ball. Part of the social role of new technologies will always emerge unexpectedly. But we are not condemned to grope in the dark, either. In this article we provide responsible engineers with an anticipatory matrix that helps to explore in advance how emerging technologies might plausibly affect the reasons behind people’s (moral) actions. Again, this matrix is not meant to cover all responsibilities of engineers: it is meant to enable engineers to take a forward-looking responsibility for the possible social role of technologies, in addition to their other responsibilities, such as those concerning environmental, safety and economic issues.

The Technological Mediation of Morality: A Matrix

Bruno Latour has pointed out that technologies can “authorize, allow, afford, encourage, permit, suggest, influence, block, render possible, forbid, and so on” human action (Latour 2005, p. 72). Based on his work, a host of case studies have been carried out demonstrating how technologies indeed mediate human actions. Several authors have developed phenomenological (Ihde 1993; Verbeek 2006), sociological (Latour 1992, 2005) or pragmatist (Keulartz et al. 2002) approaches for understanding how technologies do this. We build on this work.

But these approaches need further specification. Often the social role of technologies is described as if we were dealing with the impact of a technical object on a human object. However, objects cannot take moral responsibility: they only perform actions in the sense of reactions. Humans are distinct from objects because they have reasons for their actions and they can reflect on these reasons. Technologies affect our actions not just by altering the course of action (like billiard balls do to each other) but by mediating our reasons or motives to act in a particular way (Waelbers forthcoming).

How exactly do technologies mediate our reasons for actions? A fruitful way to explore this mediation starts by distinguishing three types of reasons on which people base their practical judgments: What “is” the situation? What “can” one do? And what “ought” one to do, given this situation and these possibilities? (Waelbers forthcoming) All three types of reasons can be mediated by technology. First, our factual beliefs are closely related to how we perceive the world. Don Ihde explains how technologies mediate such perceptions (Ihde 1993). New technologies (for instance the microscope) changed our observations (the microperceptions), which then caused our factual ideas to alter (for instance our ideas about hygiene). Technologies disclose reality in new ways. But it should be realized that technologies not only make new aspects of reality visible; sometimes they also hide parts of reality. Car drivers, for instance, miss out on a whole lot of reality that is accessible to the cyclist. Secondly, technologies mediate what our practical options are, thus affecting our answers to the question ‘what can one do’. In fact, technology’s overarching promise is to create new practical options, thus enlarging our freedom. But again, technologies not only create options, they also remove or modify existing ones. Cars may offer the opportunity to travel from A to B, but they make it difficult to enjoy peace and quiet. A third category of practical reasons is based on what we believe we ought to do. We refrain from or pursue certain actions because they conform or conflict with our values. Our ideas on what we ought to do can also be mediated by technologies. For example, our ideas on the desirable social roles of women are co-shaped by innovations like the contraceptive pill, condoms, the washing machine and the microwave. Technologies call forth new goals and duties, or help to make them obsolete.

In short: technologies mediate what we believe to be the case, what we believe to be possible and what we believe to be desirable. And by mediating these beliefs, technology mediates the actions based upon them. Not every such mediation calls for the technologist’s anticipation, as some mediations are trivial or inconsequential. But when our “is-, can-, and ought-beliefs” are technologically mediated in a morally relevant manner, it is important that engineers and scientists are enabled to take a prospective responsibility. But when is the technological mediation of our is-, can-, and ought-beliefs morally relevant? This question can be answered from three different perspectives: from the perspective of the stakeholders, from the perspective of the consequences, and from the perspective of the good life (Swierstra 2010; Swierstra and Rip 2007).

First, the interests and rights of our fellow-beings should be taken seriously when deciding how to act. The concept ‘stakeholder’ is used to mark out those parties affected by an actor’s practical choices. Of course, as the actors have rights and interests—a ‘stake’—too, they will often be stakeholders themselves as well, but this is not necessarily the case. Stakeholders have a ‘stake’ in our (in)actions, and a moral claim on us, e.g. to be treated fairly, to be helped, or to be given an explanation for why we chose to do what we did. When deciding how to act morally, it is therefore always necessary to identify such stakeholders and their interests and rights. And if our perception of who the stakeholders are were to change, so would our moral judgment. For instance, when parents know that a toy is cheap because it is made in a factory that employs 8-year-olds who work 12 hours a day, 7 days a week, they may be less inclined to buy it for the amusement of their own 8-year-old.

Secondly, acting morally implies trying to anticipate the consequences of our (non)actions, and to establish whether these are morally desirable (obligatory) or not. Realizing that our action does not have the intended consequences commonly leads to changing our moral assessment of that action. Now that people know CO2 emissions are causing climate change, they try to decrease those emissions.

Finally, morality also pertains to the question of how to live a good life, even if in contemporary, pluralistic, liberal societies this question has to a considerable extent been banned from the public domain (Swierstra 2002, 2009; Waelbers and Briggle 2010). However, insofar as our aims, and what we consider essential to human flourishing, change, our conception of the good changes too (Swierstra 2010). Technologies typically promise to help realize our goals more efficiently, to satisfy our desires, to diminish suffering and pain, and so forth. But they also help define those goals; they create new desires, new forms of pain and suffering, and so forth (Jonas 1984).

The distinct types of reasons and moral perspectives allow us to formulate a general answer to the question of which technological mediations are morally relevant and thus of particular interest to engineers aiming to expand their responsibility for the future social roles of their technologies. First they should ask how their products might affect established beliefs about is, can and ought, and then, in a second step, focus on those situations where the mediation of those beliefs effects changes in prevailing perceptions of stakeholders, consequences, or the good life, as these mediations pertain directly to moral judgment.

This is of course a complex endeavor. Therefore, we have constructed the following matrix (see Table 1) to help explore what the possible morally relevant social role of a technology might be. On the horizontal axis, we distinguish the three basic types of reasons that play a role in practical judgment, and on the vertical axis we distinguish the variables of moral judgment.

Table 1 Matrix for the technological mediation of morality

                What “is” the case?           What “can” one do?           What “ought” one to do?
Stakeholders    1a. Presence                  1b. Empowerment              1c. Rights
Consequences    2a. Anticipatory knowledge    2b. Practical affordances    2c. Responsibilities
Good life       3a. Contingency               3b. Freedom                  3c. Flourishing

The upcoming subsections illustrate each box of this table. Note that for each point, technologies can simultaneously work to increase or decrease, expand or limit, frustrate or support the aspects under investigation. Furthermore, as a stakeholder is defined as someone who suffers or enjoys the consequences of our (non)actions, and, vice versa, morally relevant consequences are defined in terms of whether they affect stakeholders, the first two rows of the matrix hang closely together and mirror each other.
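For engineers who want to use the matrix as a working checklist, the following sketch encodes its nine cells as a simple data structure. The cell labels are taken from the subsections below; the wording of the prompts is our illustrative paraphrase rather than a fixed formula.

```python
# The anticipatory matrix as a nested dictionary: rows are the moral perspectives,
# columns are the three types of practical reasons (is / can / ought).
MEDIATION_MATRIX = {
    "stakeholders": {
        "is":    "1a. Presence: whose presence does the technology reveal or hide?",
        "can":   "1b. Empowerment: whom does it empower users to help, or to ignore?",
        "ought": "1c. Rights: which stakeholder rights does it create or contest?",
    },
    "consequences": {
        "is":    "2a. Anticipatory knowledge: which consequences does it show or conceal?",
        "can":   "2b. Practical affordances: which options to influence outcomes does it add or remove?",
        "ought": "2c. Responsibilities: which responsibilities does it extend or erode?",
    },
    "good_life": {
        "is":    "3a. Contingency: how does it shift our view of our place in the world?",
        "can":   "3b. Freedom: which ways of living does it open up or foreclose?",
        "ought": "3c. Flourishing: which attitudes and virtues does it encourage or discourage?",
    },
}

def checklist(technology: str) -> None:
    """Print the nine guiding questions for a given technology."""
    print(f"Anticipatory questions for: {technology}")
    for row, cells in MEDIATION_MATRIX.items():
        for reason_type, prompt in cells.items():
            print(f"  [{row}/{reason_type}] {prompt}")

checklist("Google PowerMeter")
```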

Stakeholders

Behaving in a moral manner implies that one takes into account the consequences of one’s (non)actions for other parties, the stakeholders. The first row of our matrix helps explore how new technologies mediate the relation between the technology user and the stakeholders.

Ad 1a. Presence

We begin by asking how a technology affects the beliefs of the user concerning the factual world. Many technologies disclose the world to our senses, e.g. by making distant stars or nearby nanoparticles visible. But as we are interested in such disclosure only insofar as it is morally relevant, we ask in particular how technology can affect beliefs about the presence (or absence) of stakeholders. Technology can sometimes make actors more aware of stakeholders. For example, Verbeek explained how ultrasound technologies changed the moral status of the fetus and the experience of pregnancy for both parents and grandparents by making the fetus visible (Verbeek 2008). Another example is the television: this technology has made the citizens of affluent Western societies acutely aware of the poverty of many people in developing countries (Boltanski 1993). By presenting stakeholders, technology can make users aware of their presence. This awareness is morally relevant, as it is a precondition for taking stakeholders’ interests and rights into account.

But technologies can also decrease our moral involvement with other stakeholders by making users less aware of their presence. Günther Anders (1980/1956) described how technologies affect our empathy in a macabre way when he discussed the bombing of Hiroshima. To drop a bomb, the pilot only has to press a button. He has to face neither the victims nor the consequences of his action. Without hearing or seeing the impact, he is able to kill millions of people while, as Anders claims, listening to classical music. This is a completely different experience from killing someone close by (van Dijk 2000).

Ad 1b. Empowerment

Typically, technology promises to empower the user to do things previously beyond her or his power. When trying to anticipate how new technologies might affect the beliefs of actors and (other) stakeholders, it is fruitful to analyze these shifts in (relative) power, as they determine to a large extent what actors believe to be possible. This is morally relevant when these newfound powers affect the moral sensitivity of the agent to the fate of the stakeholders in a positive or negative way. For instance, being aware of the presence of stakeholders is a necessary but not a sufficient condition to propel people into action. A further precondition is that one believes that one is in a position to do something positive for that stakeholder. This is where technology plays a major role. It can help establish the belief that, yes, one can do something for others. The existence of the telephone or Internet, for instance, enables us to wire money to those in need even when they are far removed from us. The availability of medical instruments can cause us to no longer accept suffering and death, but to do something about them.

A specific case is when the technology user herself is the main stakeholder. In that case technology does not empower her to help others, but to help herself. Another word for such self-help is emancipation. For example, now that a wealth of information on medical issues can be found on the Internet, many patients develop clear ideas about how to further their legitimate self-interest (Van Rijen 2005). This immediately affects the power balance between doctor and patient, as it decreases the autonomy of the former and increases the autonomy of the latter.

But technology can also increase our possibilities in such a way that we become less concerned with what our actions entail for others. When listening to an mp3 player, people are less inclined to make small talk with others on public transport and behave in a more “autistic” manner.

Ad 1c. Rights

By making a technology user aware of the presence of stakeholders, and by opening up practical avenues to take the interests and rights of those stakeholders into account, technology can, and often does, motivate the user to act on behalf of those stakeholders. But technology not only discloses stakeholders’ rights to the user; it can also help to create new stakeholder rights. These rights then influence the actor’s beliefs about what ought to be done and what not. Which rights are and which are not acknowledged may be mediated by technologies, because new options arise or existing options become less attractive. We have already mentioned that the acknowledgement of women’s rights was co-shaped by developments in birth control technologies. Another example is provided by the Dutch social security system: people who need social security for a substantial period of time now have the legal and moral right to receive regular subsidies to buy a washing machine, television, computer or refrigerator. Such devices have now come to be considered as essential for people to function well in society.

But with the acknowledgement of new rights (such as the right to certain information or the right to treatment), other rights may be contested. How long will it take before citizens lose their right to be informed by important institutions like government agencies or insurance companies by means of information written on paper, rather than through the Internet? When the first genetic test for Huntington’s disease was under development, people who were at risk of developing this neurological disease were asked whether they would want to be tested once the diagnostics became available (Burgh 1997; Tibben et al. 1997). Many of the respondents worried that testing might also reveal genetic information to family members who claimed the right “not to know”. Others argued that the right “to know” was more important, since it enabled them to adopt a lifestyle that fitted their prospects. These rights to know and not to know did not exist before the introduction of the test. Previously, due to the lack of technological means, all members of the risk group were necessarily condemned to “not knowing”.

Consequences

Morality has to do with intentional behavior: it is goal-directed, and consequences matter. The second row of our matrix helps explore how new technologies mediate the relation between the technology user and the consequences of her action.

Ad 2a. Anticipatory knowledge

The introduction of a new technology can change the factual beliefs of the users, as the consequences of their actions may be brought into view, or blurred from view, by the employment of these technologies. Many technologies make us more aware of the consequences of our actions, for instance by measuring the impacts (such as energy meters) or by enabling us to observe the impacts (such as microscopes).

But many technologies have the opposite effect: they change our factual beliefs by making us less aware of the consequences of our actions. More particularly, many modern technologies conceal their environmental consequences from our sight. They do this, for instance, by taking over certain tasks we would previously perform ourselves. As we no longer chop our own wood, we do not witness the deforestation that our wood consumption causes. The central heating systems of houses and offices do not reveal their effects on the landscape. Our sewer systems hide from view the water pollution our households cause, since we no longer have to clean out our cesspits and dunghills.

Ad 2b. Practical affordances

The link between intended outcomes and realized consequences is uncertain at best, as we can learn from any deontologist critiquing consequentialist forms of ethics. Often we lack the necessary means of control to ensure that what we intend to happen will happen. However, technologies can and often do increase our possibilities of influencing those outcomes. The promise to create new practical affordances underlies almost all technological expectations, and often for good reasons. Contraception technologies, for instance, increased our possibilities to influence the consequences of our sexual actions. And the cell phone enables us to reach friends and colleagues wherever they may happen to find themselves. In this way, technology helps to establish the belief that it is possible to intervene successfully in our world.

However, creating new possibilities is not all that technologies do. As a rule, after a new artifact has been introduced into society, we find out that it rules out certain practical options that were previously available. A woman who does not want to have intercourse with her partner, for example, can no longer use the ‘threat’ of becoming pregnant. And someone who does not want to be reachable all the time and everywhere may soon find herself a social outcast, since so many of our social interactions are now coordinated by mobile phone.

Ad 2c. Responsibilities

Technologies can increase or decrease both our knowledge of our actions’ consequences and our ability to influence those consequences. These changes translate directly into our moral responsibilities (de Vries 1989). In our society, a doctor who knows how to cure a patient and is in the position to do so (e.g. by having the necessary instruments or medication available) is under a prima facie obligation to do so. The more powerful technology makes us, the greater our responsibilities. Kant famously taught that ‘Ought implies Can’. But the philosophy of technology teaches us that the reverse is also often true: ‘Can implies Ought’. With new powers come new responsibilities. In this vein, technologically mediated knowledge of, and control over, the consequences of our actions affects our beliefs about what we ought to do, and what not.

What receives less attention, however, is that technologies can also work to reduce our responsibilities. Firstly, technology can make the consequences of our actions harder to know, and it is difficult to take responsibility for consequences you do not know about. For example, in modern food production technology has acquired such a dominant role that food has to a large extent become black-boxed. Consumers hardly know any more where their food comes from, or how it is produced and processed. For this reason, technology has made it much harder for consumers to consume “responsibly”, as they literally do not know what they are eating (Waelbers et al. 2004). Secondly, technology tends to make processes so complex and multi-layered, involving so many different actors, that the possibilities of influencing the system are greatly reduced. This makes it difficult, if not impossible, for individual actors to accept moral responsibility. Furthermore, we delegate an increasing number of tasks and duties to technologies (Waelbers 2009). If these technologies fail, people often argue that this is a technological and not a moral problem (Waelbers 2002). We have, for instance, delegated the delivery of our mail to computers, servers and software such as Microsoft Outlook. If a message fails to arrive, people tend to blame it on the technology. So, in all these cases the actors end up with the belief that because things are out of their sight and/or control, they are under no moral obligation to do something about them.

Good Life

The third row of the matrix addresses the influence of technologies on our perceptions, possibilities and assessments of the good life. Often such shifts result from a combination of multiple technologies, but individual technologies can also have a substantial effect.

Ad 3a. Contingency

New technologies affect established ideas about what is good to be and good to do. First, they can do this by altering our perceptions of the place of humans within the world. A common example to illustrate this point is the compass. This technological device (together with many other sailing technologies) has contributed to the change in the Europeans’ understanding of their position in the world and of (their own and other people’s) culture. More generally, technology fills us with pride, and the Promethean dream helps us believe that not even the sky is the limit. Technology helps to undermine the belief that we are an integral part of a Normative Nature, in which everything and everyone has his or her own role to play. We have come to believe that we can do and create anything we want, given sufficient resources and time. Documentaries such as the Discovery Channel’s “Mega-Structures” series testify to the widespread idea that we are masters of both nature and ourselves. There is no pre-established order to obey. Technologies help establish the (essentially) humanist belief that reality is contingent and open to revision.

But this coin has a flipside. Another classic invention, the telescope, has fundamentally altered our understanding of our place in the universe too, but in this case the technology did not lead to pride, but rather to modesty. The telescope firmly removed the earth—and us humans with it—from the center of the universe. And up until the present, technological developments continue to mediate our perception of who we are. It is argued that neuroscience and neuro-technologies show us that many of our actions are not “autonomous” in the enlightenment sense of the word (Kalis et al. 2008). Many decisions are taken unconsciously (Broks 1997; Kalis et al. 2008). These technological developments are diminishing our status and capacities as autonomous, moral persons: we are not the center of the universe, created in the image of God, nor are we able to decide freely how to live our lives. However, even if these technologies lead to factual beliefs that hardly contribute to our sense of pride, they do not restore the previous concept of Nature as a Benign Order. They too impress upon us the fact that we live in a contingent universe.

Ad 3b. Freedom

A contingent world may have lost its sacred and ordered character, but it does open up opportunities for action. Technologies create and limit our options to live what we believe to be a good life. On the one hand, there have never been so many options to find friends, jobs and leisure activities that suit your interests and outlook on life. Society has become more fluid, more mobile, due to cars, trains and airplanes. People who share the same interests can find and contact each other easily via the Internet, regardless of where they live. More fundamentally: with the increased opportunities to shape your life rather than simply fulfill the role that is connected to your given status in society, the dominant conception of the good life has moved away from ‘obedience’ towards ‘autonomy’ and an activist stance.

On the other hand, the increasing pressure for everyone to use the Internet, to maintain several e-mail accounts and to own a mobile phone severely limits people’s freedom. If you want to participate in society, it is increasingly obligatory to embrace these technologies and, with them, also many superfluous contacts and a rushed lifestyle.

Ad 3c. Flourishing

By altering our perceptions and practical options, technologies also co-shape what we believe to be virtuous. Foucault’s description of disciplining in schools made clear that classroom design, chairs and desks enforce a certain physical pose in the students (Foucault 1975), aimed at encouraging a certain moral pose or attitude. The bodily position is closely linked to the attitude required for learning. A few decades ago, classrooms were designed in such a way that the students were forced to sit up and look at the lecturer or teacher. The rooms and furniture did not stimulate communication, but listening. There was little room to move, and only small desks were provided on which to make notes. This design was closely connected to what was then believed to be good education, and the design co-shaped the students’ attitudes.

Nowadays, complex multimedia rooms are developed for education, equipped with large, wheeled tables and luxurious office chairs. These surroundings are not only more comfortable, they also stimulate a pro-active learning attitude. Students are no longer supposed to sit quietly and listen; they have to work on projects, engage in debates and communicate with others. The virtuous present-day student is unique, pro-active, assertive, communicative and collaborative, instead of observational, timid, obedient, and solitary. The classrooms are designed to co-shape these virtues in the students. In this case, there is a close relation between our virtues and actions. The furniture shapes the students’ bodily posture (a physical condition), which in turn stimulates the attitude they adopt in relation to each other and to the teacher. As a result, students are encouraged to act in a way that is considered virtuous within the educational practice.

Taking Responsibility for Designing the Social Role of Technologies

In this section, we explain how the nine cells of the matrix can help actors to explore how technologies might mediate people’s reasons for action. This is important in order to be able to take a proactive responsibility for this mediating role. Here, it becomes clear why we chose to focus on the reasons behind actions rather than on the actions themselves: it has the important advantage that it leaves room for the agency of those affected by the technologies, rather than reducing them to passive ‘victims’ of the agency of technologies. Focusing on practical reasons empowers people because they can play an active role in evaluating these reasons. This evaluation is what we call practical reasoning or reflection. We understand reasons and practical reasoning in a MacIntyrean sense (MacIntyre 1999). Alasdair MacIntyre distinguishes between having reasons and the activity of practical reasoning. Although we are not aware of all our reasons for action, we can work to take responsibility for our actions by using our faculty of practical reasoning. In daily life, people can evaluate the reasons that are biologically or socially given by standing back from them and thinking critically: “Am I going to eat the last piece of pizza, or will I be wise and eat some fruit instead?” Similarly, even when the reasons given by our biological condition and social surroundings are technologically mediated, we can still apply our faculty of practical reasoning to reflect on the desirability of our actions (Waelbers forthcoming). For instance, even though communication and IT technologies provide us with fewer reasons to actually meet with clients and colleagues, we can still ask ourselves whether it would not be better in particular cases to meet these people in person.

In Table 2, we list examples of questions that can be asked when discussing the nine forms of mediation of reasons for actions. Of course, not all questions are relevant for all technologies: it would be a bit odd to ask how a new espresso machine alters people’s pride in being human.

Table 2 Questions to ask with the help of the matrix

Consider for instance the question of how the Google PowerMeter, launched in 2010, might mediate the reasons for actions of its users. The Google PowerMeter is an online program that monitors the energy consumption of people who voluntarily subscribe to this free service. Google describes its PowerMeter as “a free energy monitoring tool that helps you save energy and money. Using energy information provided by utility smart meters and energy monitoring devices, Google PowerMeter enables you to view your home’s energy consumption from anywhere online.” After you install an electricity meter in your house that is connected to the Internet, the Google software collects the required data, which is represented in a graph. The software has six functionalities (a sketch of the arithmetic behind two of them follows the list):

  1. Track energy over time: a graph depicts how much energy the member has used by the day, week or month.

  2. ‘Always on’ power: part of the graph shows power that is always on (standby devices).

  3. Predicting costs: an estimate of the annual energy bill.

  4. Customized feedback: members can alter the cost per kWh to see the impact of changes in energy prices, receive weekly emails, and share their usage with family and friends.

  5. Budget tracker: members can set a personal energy saving goal and track their progress.

  6. Join the community: sharing experiences and tips with other users.
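To give a rough sense of what functionalities 2 and 3 involve computationally, here is a minimal sketch assuming hourly kWh readings; the function names, the baseline heuristic and the tariff are our illustrative assumptions, not Google’s actual implementation.

```python
# Sketch of the arithmetic behind "always on" power (2) and cost prediction (3).
# Input: hourly electricity readings in kWh; the tariff is an assumed flat price.

def always_on_kwh(hourly_readings: list[float]) -> float:
    """Estimate the standby load as the minimum hourly usage (a simple heuristic)."""
    return min(hourly_readings)

def predicted_annual_cost(hourly_readings: list[float], price_per_kwh: float = 0.20) -> float:
    """Extrapolate the average hourly usage to a full year and multiply by the tariff."""
    avg = sum(hourly_readings) / len(hourly_readings)
    return avg * 24 * 365 * price_per_kwh

readings = [0.25, 0.21, 0.20, 0.48, 0.90, 0.35]  # made-up sample of hourly kWh values
print(always_on_kwh(readings))          # ~0.20 kWh: the always-on baseline
print(predicted_annual_cost(readings))  # rough annual bill at 0.20 per kWh
```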

How can the matrix help us explore what the likely mediating role of this technology might be?

Consider the first row of questions, which concern the stakeholders. A quick brainstorm on these questions reveals that the Google PowerMeter brings otherwise invisible stakeholders to the fore and makes their presence felt. Friends and family can see each other’s consumption levels, and become recognized as people who have a stake (an interest) in keeping consumption down (1a). By customizing the Google PowerMeter, friends and family can not only see but also respond to your power consumption. They are empowered as stakeholders, as they can comment on your choices and way of life (1b). It is likely that such a development will result in a social norm of consuming no more than the average household among your friends (1c). Furthermore, when people can observe each other’s energy consumption, questions about privacy rights are likely to arise (1c): “What were you doing at 2:00 a.m.? You really had a peak in energy consumption.”

The questions on the consequences in the second row show that the mediation of the consequences is likely to be ambiguous. Google PowerMeter visualizes, and thus increases knowledge of, the financial consequences of people’s energy consumption (2a). In this way, it empowers the user to lower her energy bill (2b). But the information the Google PowerMeter supplies focuses mainly on the financial aspects. Consequently, it runs the risk of hiding the environmental impacts from view (2a). This would not be a problem if no revenge effects were to be expected. But from the experience with the energy-saving light bulb (see the introduction), we can conclude that an exclusive focus on the financial aspects might perversely increase energy consumption, for example when it becomes clear that certain applications are relatively cheap and people can afford to increase their use (2b). And how will people behave when they observe that energy becomes cheaper per kWh (2b)? Currently, many people do not know how much energy different devices use: they lack the knowledge to be responsible. The PowerMeter gives them that knowledge, and with it responsibility, but with a strong focus on the economic aspects rather than the environmental issues (2c).

The third row of questions is about contingency, freedom and virtue. Energy bills have long been events that happened only once a year, Google argues. The Google PowerMeter aims to provide people with insight into their energy use on a daily basis. Energy bills no longer simply happen to them; they can be in control of the amount of the invoice (3a). Consequently, people’s freedom will increase, since they will have more information available. But this is not the complete picture: other people can meddle in your energy decisions. Due to a novel form of—technologically enabled—social pressure, people might feel less free to act differently (3b). Lastly, people who use considerably less energy than their friends or family may find themselves tempted to start using more energy, since the PowerMeter convinces them that they do not have to be “more Catholic than the Pope” (3c).

How can the programmers of Google take responsibility for the technological mediation? Taking responsibility for designing the mediating role of technologies in real life is not about liability: we do not want to discuss blame and shame. Neither are we planning to blame Google if the software turns out to also have unforeseen social impacts. But what we aim for is that actors apply their human capacity of moral imagination to explore what the mediating role of their technology might be, and to evaluate the question of whether this mediation contributes to human and environmental flourishing. Taking responsibility in the case of the Google PowerMeter would entail studying not only whether the information provided by the software is correct. Actors should also consider the desirability of the technological mediation.

What can people do to take this responsibility seriously? In some cases, it is relatively easy to adjust the design. For instance, the programmers of the Google PowerMeter could consider informing users not only in terms of dollars but also in terms of expected CO2 emissions. But other issues call for a more intersubjective approach, involving more stakeholders and other forms of expertise. In such cases, taking responsibility may involve, for instance, scientists and engineers asking for help and presenting their designs and ideas in a transparent and morally assessable manner to a discussion group.
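Such an adjustment could be modest in engineering terms. As a hedged sketch (the emission factor below is illustrative and varies considerably between national grids), dollar feedback could be complemented with a CO2 estimate along these lines:

```python
GRID_EMISSION_FACTOR = 0.45  # kg CO2 per kWh -- illustrative; actual values depend on the grid mix

def co2_feedback(kwh_used: float, factor: float = GRID_EMISSION_FACTOR) -> str:
    """Translate electricity use into an estimated CO2 figure for user feedback."""
    return f"{kwh_used:.1f} kWh ~ {kwh_used * factor:.1f} kg CO2"

print(co2_feedback(350.0))  # e.g. a month's consumption -> "350.0 kWh ~ 157.5 kg CO2"
```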

Even if it is not certain what the social role of an emerging technology might be (as is the case with many emerging technologies), a forward-looking responsibility should be taken in the sense that professionals should at least work to understand what that social role might be. In some cases, pilot studies, discussions and realistic moral imagination may provide us with some answers (Waelbers forthcoming). The above-mentioned possible social impacts of the Google PowerMeter can, for instance, be studied in pilots (in which controlled groups of potential users test the functions) and in simulation studies (for instance of the effects of an unexpected change in energy prices).
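One very simple way to set up such a simulation is a constant-elasticity demand model; the elasticity value below is an assumption for illustration only, and a real pilot would calibrate it against observed behavior.

```python
# Toy simulation of how usage might respond to a sudden price change,
# using a constant-elasticity demand model (the elasticity value is assumed).

def simulated_usage(base_kwh: float, base_price: float, new_price: float,
                    elasticity: float = -0.3) -> float:
    """Scale usage by (price ratio) ** elasticity."""
    return base_kwh * (new_price / base_price) ** elasticity

# If prices drop 30%, an elasticity of -0.3 predicts roughly 11% more consumption:
print(simulated_usage(base_kwh=3500, base_price=0.20, new_price=0.14))
```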

In many cases, parts of the social role will remain opaque, regardless of which studies are performed. But the simple fact that we will never be absolutely sure does not mean we should stop thinking about how to realize what we consider to be a desirable social role. The fact that we never know at the beginning what the results of a techno-scientific project will be does not imply that scientists and engineers should refuse to take up the challenge. The same holds for the social role of technologies: even though we can never be sure what the social role will be, that does not mean that we should not try to develop new technologies in such a manner that their social role will be desirable. This might even be a prudent stance for scientists and engineers, since a desirable social role is likely to smooth the introduction of an invention into society.