1 Introduction

In relation to innovation and emerging technologies, it is recognised that technologies affect not only user behaviour, but also wider social norms and structures. For example, the birth control pill, made to enable control over pregnancy, implied a detachment of sexuality from pregnancy, leading to a change in the cultural norms surrounding sexuality. Some social consequences of this detachment were the sexual revolution in the 1960s and a growing acceptance of homosexuality (Mol 1997; Swierstra et al. 2010). However, wider consequences are usually somewhat in the dark when a technology is being developed, as stated in the well-known Collingridge dilemma: ‘The social consequences of a technology cannot be predicted early in the life of the technology. By the time undesirable consequences are discovered, however, the technology is often so much part of the whole economic and social fabric that its control is extremely difficult’ (Collingridge 1980, p. 11). Nonetheless, the last few years have seen attempts at early anticipation of possible social consequences of emergent technologies through broad and multidisciplinary mapping methods. This, in turn, opens up the possibility of a proactive approach to the design of technologies, shaping behaviour in, and consequences of, the user context in a desired direction.

However, this kind of design approach has been criticised. Albrechtslund (2007) and Ihde (2008), for instance, claim that proactive approaches fail to consider that technologies can take on different stable meanings in different contexts; they are multistable.Footnote 1 The shaping of the separate stabilisations cannot be related to the designer’s intentions, but depends on how the technology is received in a given context. It is argued that not sufficiently recognising multistability means underestimating other shaping influences on technologies and might even lead to the (false) belief that possible ethical problems arising from innovation are already taken care of through the design stage (Albrechtslund 2007).

However, although recognising the importance of this danger for proactive research, I am not convinced by the mentioned objection. For instance, it is not given what kind of relation is presupposed between the design context and the user context in proactive approaches. How that relation is seen depends on the framing of the proactive work, more precisely on how the dynamics between technology and society are conceptualised. Regarding proactive research as a manner in which society should attempt to control technology and its consequences certainly underestimates ‘outside influences’, but, as I shall argue, this is only one possible proactive approach. Being proactive is more complex than just assuming a (linear) relation between design and use contexts; this relation harbours several dimensions that will disclose whether ‘control’ indeed is the intention. It is perfectly possible to have a proactive approach without committing the designer fallacy.

Furthermore, it is not entirely clear what ‘multistability’ does in the way of explaining why technology use cannot be controlled through design. Multistability is primarily a descriptive concept, turning our attention to the fact that technologies are not condemned to their designed meaning. However, in order to grasp a proper notion of the dynamics in the technology–society relation, we need to understand why technologies are multistable. This has double importance, since disclosing the dynamics behind multistability means disclosing how a piece of technology attains its meaning and stable function in a society, thereby providing a conceptual foundation for design methodology.

This article consists of three parts. First, I present the aims and criticism of proactive responsible design. I argue that it is primarily a specific, externalist conceptualisation of the technology–society relation that is susceptible to the criticism. Then, I elaborate on the externalist framework, showing how it frames two particular dimensions of the proactive assumption. And finally, I present some key concepts of an alternative, interdependent framework, explicating why it is not susceptible to the same criticism as the externalist one.

2 Responsible Design

2.1 Designing Behaviour

Technologies are often developed, redeveloped and modified because they are meant to serve specific functions. That is to say, technologies enter a given setting and users use them in a manner that retains these functions. Sometimes a technological device is a new competitor in an existing niche, such as the iPhone, Apple’s belated (2007) but successful contribution to the plethora of mobile phones. Other times, a technological device attempts to create or transform a niche, such as Apple’s own Apple II, which took the computer from the realm of the enthusiasts to the consumer market.Footnote 2 In both cases, and in the cases that fall between these two, the developers’ main method of ensuring that the functions are retained is found in how the technologies are designed: A technology is designed to be used in a specific manner that is in accordance with the functions it is thought to serve.

Often this link between design and user behaviour is implicit and not reflected upon as such. Other times, it is more deliberate, such as when the image of a fly is etched onto the urinals of Schiphol airport in order to reduce men’s spillage. It is a simple trick, but it works because men tend to increase their precision when having a target to aim at (Thaler and Sunstein 2008, p. 4).Footnote 3 This example, of a technology that is designed to persuade users into behaving in a specific way, is only one of several strategies to shape behaviour through technology (Verbeek 2011, p. 153; Tromp et al. 2011). Persuasion is a relatively gentle form of using design to shape the user context. A car that does not start unless the driver puts on the seatbelt, a short-lived attempt from the 1970s to enforce seatbelt use, is an example of a far more forceful impetus. Here, our behaviour is decided for us; the ignition–seatbelt technology has been delegated the task of guarding our behaviour. Having an ear-splitting alarm ring when a car is started without the seatbelt being put on does not decide for us, but is still an example of a coercive technology, implementing a strong pull towards a certain form of seatbelt-wearing behaviour (Latour 1992). Yet another strategy would be to seduce users into specific user patterns. The Metro in Seoul issues tickets that also function as lottery tickets. The aim here is to use technology in such a way that the customers pay for their travels rather than dodging the fare (Tromp et al. 2011). A distinction that runs across these types of design strategies is between invitation and inhibition; some technologies invite specific forms of behaviour, others inhibit specific forms (Verbeek 2006b). In all cases, despite the different strategies, we find a strong link between the design of a technology and its intended effect on users’ behaviour.

But design does not just affect user behaviour; technologies also have consequences, often unintended, that go beyond their specific designed functions. For instance, the use of nanotechnology in textiles and other everyday items brings about environmental issues,Footnote 4 and new scanning techniques in healthcare bring about issues of radiation safety.Footnote 5 In a study of ethical issues in bio-nanotechnology, such ‘further’ effects of emerging technologies have been labelled ‘hard’ societal impacts (Boenink et al. 2010). These are quantifiable social consequences of technological innovation and are to varying degrees identified and dealt with as part of risk/benefit considerations. However, there are also ‘soft’ societal impacts that need to be recognised; how novel technologies ‘also impact on social practices and routines and the moral norms underlying such practices’ (Boenink et al. 2010, p. 2). These are anticipated to a lesser degree, and there are no established methodologies for doing so either.Footnote 6

Soft impacts are doubly problematic: they are very hard to anticipate, and, being values, their assessment might be different in the future because technology tends to impact on the foundation for our normative judgements (Swierstra et al. 2010). For instance, today, privacy is thought to be a very basic human right, and discussions about information and communications technology (ICT)-related developments, such as the EU’s controversial Data Retention Directive,Footnote 7 are often framed by, on the one hand, issues of privacy, meaning the right to own one’s private information, and on the other, issues of safety for a society. However, our understanding of the pivotal role of privacy here might be changing because of the penetrating presence of ICTs that facilitate and improve our lives in numerous ways, but also leave digital traces that enable wide-ranging collecting and storing of information about our whereabouts. If we come to accept some of the negative aspects in favour of some of the positive aspects, a value like privacy might eventually change its meaning considerably.Footnote 8 Nonetheless, the elusiveness of soft impacts is not meant to discourage value-related discussion during the development phase of technologies; rather, it is to emphasise the importance of considering further normative and social consequences of innovation.

The emergence of ELSA (ethical, legal and social aspects) research is a testimony to the increasing awareness of the further social impacts of innovation. ELSA research first started as a way of dealing with complex social issues that arose from new knowledge and innovation in genomics and molecular biology, but the concept quickly gained a more wide-ranging sense (Zwart and Nelis 2009; Rip 2009). ‘ELSA’ is now used both as a name for research programmesFootnote 9 and as indicating a specific approach to technological development projects.Footnote 10 Other initiatives also aim at merging technical aspects with social aspects, such as the MVI programme of The Netherlands Organisation for Scientific Research.Footnote 11 This programme funds projects in nano- and biotechnology, ICT, neuroscience, agriculture and healthcare with ‘ELSA-like’ objectives: ‘The thematic programme contributes to responsible innovation by increasing the scope and depth of research into societal and ethical aspects of science and technology. It focuses on proactive research into the ethical and societal aspects of technological development projects.’Footnote 12 The use of ‘proactive’ emphasises that we should not be content with the realisation that values and norms are affected by innovation, but that we should take responsibility in shaping these values and norms.

2.2 The Positivist Problem

Although the mentioned approaches vary, both in terms of strategies (persuasive, inhibiting, etc.) and aims (constraining behaviour, shaping norms and values, etc.), they are connected through a specific assumption, namely that the user context of a technology can be intentionally shaped or influenced through the design context. In other words, the basic idea behind these approaches is that design strategies can be used in order to shape our behaviour in relation to a technology. Estimates of the extent of this shaping, in terms of how user behaviour is actually shaped and how far this reaches into contextual and normative aspects, differ, of course, but careful design, based on anticipations of how a device will impact on its user context, seems to be a shared idea.

In a recent article, Ihde warned against this assumption, calling it a designer fallacy. In literary studies, the idea that the meaning of a text can be traced to the author’s mind, that is, that the meaning lies in the intention behind the text, is called an intentional fallacy. A text might be taken to mean something quite different from its intended meaning (if there is one) for cultural reasons (not to mention if the text was written a long time ago), and an interpreted meaning can be as valid for the receiver group as the intended meaning (if known). Ihde argues that there is a similar fallacy in taking the meaning of a technology to lie with the ‘author’ of a technology—the designer(s). According to Ihde, the notion that a piece of technology’s meaning is its designed meaning implies ‘some degree of material neutrality or plasticity in the object, over which the designer has control’ (Ihde 2008, p. 51). In other words, the call for proactive socially responsible innovation seems to imply the long since discredited idea of technological instrumentalism, the idea that technological artefacts are no more than mere means to an end; things that transparently connect intentions and effects, without actively contributing to either of them (Heidegger 1977; Borgmann 1984, p. 177; Feenberg 1999, p. 1).

In a study of value-sensitive design (VSD), a methodology for normative design with a proactive attitude similar to the shared assumption identified above, Albrechtslund has identified a positivist problem (Albrechtslund 2007; cf. Friedman et al. 2002). It is ‘positivist’ in presuming a direct correspondence between design and user ends (Ihde’s designer fallacy), and a ‘problem’ because for many cases of technology development such correspondence cannot be found.Footnote 13 Albrechtslund argues that not recognising this problem prevents VSD from actually having a profound effect on the value-related effects of innovation because the outside (of design) influences on use are underestimated. This is unfortunate because it “might give users, legislators, and others the impression that technology developed under certain guidelines are somehow certified ‘foolproof’ with regards to future ethical problems and dilemmas” (Albrechtslund 2007, p. 71).

Both Ihde and Albrechtslund point to the multistability of technologies in order to refute the designer fallacy: ‘technological artefacts can enter into many very different human–technology relations and […] a technology is defined by its particular relational context’ (Albrechtslund 2007, p. 68). One item can perform different functions and have different meanings within different social contexts, either across cultures or across time. An artefact that for some reason ends up in a different context will often take on a different function and meaning. Cans of sardines that Australian gold miners in the 1930s left behind in New Guinea were, for instance, adapted into the culture of the natives, but as shiny parts of decorative headwear (Ihde 1990, p. 125). Without transfer of the surrounding socio-technological system, a piece of technology is left with its materiality. Because the social context differs, the push from the materiality stabilises the item in a different role. In another example, Ihde follows the bow through different cultures, seeing it become different types of bow, ranging from small ones emphasising easy use on horseback to larger ones emphasising precision and stationary use on the battlefield. What kind of design was seen as ‘correct’ was related to the different ideals of warfare in the different cultures (Ihde 2009, p. 16ff).

Although recognising the perils of not acknowledging multistability, I am not convinced by Ihde and Albrechtslund’s criticism. First, although the multistability of technologies does turn our attention to an important aspect of the shaping of technology, it is unclear how this concept can have a methodological role, that is, in what sense it can be helpful in doing actual design work. For that, I find the concept too descriptive. In order to make use of it methodologically, we must understand why technologies are multistable. Ihde and Albrechtslund indicate that recognising multistability might lead to a different methodology for doing design, and a different understanding of the role of design, but for that to happen, we also need to understand the dynamics leading to divergent stabilisations.

This leads me to another reason why their objections are not convincing; it is less than clear that the positivist problem necessarily follows from the assumption that a more broadly informed design context could be utilised to shape the user context.Footnote 14 Rather, the positivist problem follows from understanding the link between design and user ends as a symmetrical correspondence (‘what goes into design comes out at the user end’), that is, as mentioned by Ihde, when technology is regarded in an instrumentalist manner, as a mere tool for achieving specific behavioural or normative aims. However, this is only one way of framing the relation between technology and society, the concept of technology and, ultimately, the role of design.

When technology is seen in the instrumentalist manner, design tends to focus on technology itself, revolving around the question of how best to control technology and its social consequences. Verbeek has argued that the instrumentalist notion of technology follows from an externalist view of the technology–society relation. This is the view that technology and society belong to two different domains; they are external to each other and are, consequently, defined and constituted separately. The task of normative and socially responsible design from this view, then, turns into the task of guarding society from the potentially ill influence of technology (Verbeek 2011, p. 11).Footnote 15 Instead, technology must be domesticated and put to our benefit in an instrumentalist manner. In this sense, the focus of the design context necessarily becomes the control of technology, which means that the instrumentalist notion indeed implies an underestimation of outside (of design) influences on the shaping of technology.Footnote 16

However, using a different framework, with a different view of the technology–society relation and a subsequently different notion of technology, it is not given that proactive and socially responsible design has a positivist problem. This requires conceptualising technology and society in a more internalist manner: as interdependent, belonging to the same domain, mutually shaping and mutually constituting. Thus, in order to observe Albrechtslund’s warning and not lose proper influence over the social effects of technologies, we need to develop a different framework in which the proactive design work is formulated and executed. When we do this, the relation between technology and society emerges as more dynamic than in the externalist view. Incidentally, explicating the dynamics in the technology–society relation also provides us with the conceptual tools not just to recognise technologies as multistable, but also to explicate why technologies are multistable.

However, before elaborating on the interdependent view, I shall spell out two aspects of the externalist view. The assumption that the user context can be shaped through the design context is a complex one, harbouring several dimensions. The proactive assumption is therefore not a homogeneous assumption but offers several lines of attack. In the following part, I point to how these lines of attack are understood in the externalist framework, and I shall emphasise where ‘the separation of technology and society’ becomes visible.

3 Two Dimensions of the Proactive Attitude

As we saw, the positivist problem emerges when the design end and the user end are treated as symmetrical; what goes into the design end comes out more or less unaffected at the user end. This is due to the instrumentalist concept of technology that is characteristic of the externalist approach. The issue then, given this presupposition, becomes how to gain the best possible control over the user context through the design. However, this is precisely the design approach that underestimates outside influences on the shaping of technology use and the subsequent consequences of use. In this part, I therefore point to some aspects of the user context that are overlooked by the positivist assumption that there is symmetry between the design context and user context.Footnote 17

3.1 A Practical Dimension

One practical dimension of considering soft impacts at the design stage is how to anticipate them. A further practical aspect is how to represent socially responsible awareness in the actual design process. As an example, for telecare technologies, the latter question is sometimes seen as a matter of translating user requirements into technical requirements. This, though, is no straightforward task, as the definition of requirements might be hampered by a language gap in the communication of users’ needs. For instance, in the case of medical staff, the ‘main challenge is… to understand the needs of the healthcare staff at an early stage of the development process, so that those involved can be provided with a system they actually need for their various work processes’ (Huis in t’Veld et al. 2010). Furthermore, there is a double problem in terms of expertise; technicians know little about the needs of the healthcare staff, whereas healthcare staff are usually unaware of how the technology being developed could be imported into their daily tasks and therefore fail to express their needs in relation to it.

A ‘positivist’ approach to the practical dimension would be to presuppose that designing a telecare technology in a user-friendly manner means translating more precisely stated user requirements into more fine-grained technical requirements. Using almost the same expression as Ihde, Stewart and Williams warn against the design fallacy, which is ‘the presumption that the primary solution to meeting user needs is to build ever more extensive knowledge about the specific context and purposes of various users into technology design’ (Stewart and Williams 2005, p. 44). Although it might hold for some technologies, or some uses of technology, such a correlation between user and technical requirements should not be taken for granted for telecare technologies. For one thing, whereas more precisely defined technical requirements imply that patients behave more or less identically, patients will differ a great deal in terms of circumstances. Even among patients with the same disease, age, personality, technical skills and curiosity, education, income, family and social network, and so on will inevitably vary; one person in need of a telecare technology can have very different requirements from the next person in need of the same telecare device. What counts as good care differs on a case-to-case basis (Mol 2006); more and more technical requirements, implying an increasingly rigid system, conflate such differences.

Furthermore, designing systems that are more rigid might underestimate what is involved in healthcare. As pointed out in script theory, innovation does not just concern functionality, but also, albeit usually implicitly, the social setting in which the technologies will be introduced (Akrich 1992). For telecare technologies, this means that design entails assumptions about healthcare structure, how it is constituted and how it proceeds. In this case, a ‘positivist’ attitude would be that healthcare is a mere assembly of practices; that is to say, that healthcare can be compartmentalised into discrete functional practices that can be fulfilled with ICTs performing those functions. For instance, many telecare technologies are introduced with a promise not just to facilitate healthcare, but also to improve the quality of the care (Oudshoorn 2009). However, in realising their expected roles as more independent, patients lose some of the personal contact with their doctors and nurses, and vice versa. Some of this contact cannot be traced to specific functions of restitution and care, but is related to getting to know each other and building a relation of trust that is effective throughout the care process (cf. Wynsberghe and Gastmans 2009). Losing some of the contact points for forming personal relationships, then, might lead to a negative rather than positive impact on the quality of the overall process of care, even if the functional sub-practices are more than adequately taken care of by the telecare equipment.

Even though it is a social ‘second-order’ aspect, personal contact is nonetheless important for caregiving, and having that contact reverberates in how the various sub-practices are constituted and progress. “Coming to know the patient […] seems to take place particularly during face-to-face contacts at the beginning of the care trajectory. If relationships with patients are well-established, ‘seeing the patient’ becomes less important and a first assessment of the seriousness of patients’ complaints can be done by phone” (Oudshoorn 2009, p. 396). Attaining close relationships also constitutes the manner in which the patients relate to their nurses, as they ‘play an important role in lowering the threshold for patients to call nurses in case they gain weight or experience breathing problems. This is important because an early detection of the problem may prevent immediate hospitalisation’ (Oudshoorn 2009, p. 395).

Presupposing that sub-practices in care can be replaced with automated telecare devices without disrupting the overall care process is positivist because it takes for granted a symmetry between design and user contexts: define a function within the functional whole, design the technology in accordance with this definition, and expect it to behave in accordance with the definition. However, at least for some illnesses, care must be regarded as a continuous process involving ‘non-functional’ aspects, which complicates that kind of thinking. Disrupting sub-practices means disrupting the totality.

3.2 An Ethical Dimension

In considering the ethical dimension of the proactive assumption, the externalist approach surfaces more blatantly. For instance, this approach is at work in asking whether doing proactive design means moving away from a democratic and social shaping of technology towards a technocratic shaping of society. Proactive design implies constraining the ways a technology is handled in order to limit its undesired soft impacts; is this a form of technocracy, where the social impact of technology is shaped prior to the innovation? Are technological experts (and designers) now shaping the norms and values in our society?

Moreover, and relatedly: does using design to steer people’s behaviour infringe upon our autonomy? Introducing telecare technologies into healthcare means that patients go from being relatively passive care receivers to being more involved in the diagnostic work, the monitoring and their day-to-day dealing with their disorders. In other words, rather than leaving such tasks to doctors and nurses, the technologies ‘expect’ patients to fulfil specific tasks.Footnote 18 Even though this presumably is for the good of the patients, the telecare practice ‘demands’ that patients behave in certain ways throughout the day, taking precautions to be in the vicinity of the equipment when measurements should be taken, etc. This is the case not just in terms of monitoring, but also in relation to how the caretaking proceeds. Apart from monitoring the disease, the COPDdotcom, a telecare device for management of chronic obstructive pulmonary disease, also promotes a more active lifestyle among the patients.Footnote 19 This means that the equipment prompts the patient to do specific exercises. Again, it is for the benefit of the patients, but patients could also have done the same exercises according to a time schedule, or at times that are more convenient for them. The stronger impetus, coercive in the terminology from above, of having the equipment signal when the exercises are to be done makes sure that more people actually do them, but in constraining the choices of the patient, the equipment represents a loss of autonomy.

Both these worries presuppose a split between technology and the social in the sense that technology is deemed to pose a threat to something that is taken to be human (autonomy) and social (democracy). Such technologies are taken to represent ideas and structuring elements that enter the human realm from the outside and are therefore potentially oppressive. However, this seems to be a forgetful manner of describing what happens when we are facing a disease. As a patient, one’s daily life will already be constrained by aspects foreign to an assumed ‘autonomy’; alcohol and smoking must be renounced, a strict diet should be followed, not to mention the limiting effects of the disease itself on daily life. Furthermore, the technologies do not just constrain our behaviour; they also open up possibilities. Telecare technologies enable patients to stay at home rather than being hospitalised or spending lots of time travelling back and forth to hospitals and clinics (Vollenbroek-Hutten et al. 2010). Treatment at home can increase the effectiveness of the disease management, for instance in terms of intensifying the training, which in turn might enable the patient to expand the possibilities for movement, rather than limit them.Footnote 20

As I shall elaborate below, this taps into a very basic aspect of the technological presence in our lives; technologies always come with a constraining aspect, limiting some aspects of how we see, understand and act in the world. But, at the same time, they will also open up possibilities, enabling us to see and act in the world in new manners. Emphasising the first characteristic in relation to telecare technologies can lead to situations of distrust, where the telecare device appears as intruding, a strange presence in the patient identity, and thus invites alienation. Emphasising the second, on the other hand, sees the technology as allowing latitude for the patient to construct a patient identity with the equipment as an integral part (Kiran and Verbeek 2010). In reality, though, it will not be a matter of either/or, since different people react differently to the technological presence in their illness. Apprehending the enabling–constraining nature of technology demands, though, that we move away from the externalist framework and into an interdependent framework.

In this section, I have discussed two different dimensions of the proactive assumption, showing some of the shortcomings in assuming symmetry between the design context and user context and in attempting to control the user context through design. As we saw, assuming symmetry (leading to worries about autonomy or to a specific notion of the care process, etc.) discloses that the externalist approach is at work; technology and society are seen as distinct, and technology’s role becomes either that of a threat or that of a mere instrument. This is the approach that leads to the positivist problem. In developing a methodology for proactive design, we therefore need to avoid framing our understanding of the proactive assumption in those terms.

4 Interdependence: Technology as a Revealing–Concealing Structure

Both Ihde and Albrechtslund point to multistability as a crucial aspect of the notion of technology that is left behind in the instrumentalist definition. However, they fail to explicate what multistability really means, nor do they discuss how the concept can inform a proactive methodology. In this article’s last part, I present three terms—constitution, articulation and trajectories—that together clarify why technologies are multistable, by way of which the concept’s methodological role is also elucidated. In short, this discussion explicates the dynamic relation between technology and the social.

As we have seen, to avoid the positivist problem, we need to steer away from the view that sees technology as external to society. However, technology is not subordinate to societal forces either; it poses a certain resistance to being shaped. Understanding technology means understanding technical mediation; how technologies affect the relation between ourselves and the world around us. I shall outline some key features of technical mediation, which go to argue that technology and society not just interact, not just shape each other, but mutually constitute each other; neither is understandable without the other, and neither is effective without being both enabled and constrained by the other. In other words, technology and society are interdependent.

4.1 Constitution

An interdependent relation differs from an interactive relation in that the latter indicates that the dealings of the participants, their internal relation and the task they perform, are based on inherent or autonomously defined functional properties—properties that remain principally unaffected in the interaction. Interdependence, on the other hand, means that technologies, users and various contextual phenomena, including practices, all bring qualities that significantly affect the appearance of the other factors. A carpenter is not ‘a person plus a hammer’, but a constituted entity, dependent on the person, the material properties of the hammer, the task at hand, the practice the carpenter is immersed in and other aspects that enable and constrain the carpenter’s job. For this reason, it is suitable to talk of technical mediation as being performed by a constituted totality.

This means that the constituents of the mediation appear for each other in specific ways. That is to say, technical mediations are not merely composed of technology, person(s) and social aspects, but that what any component offers is shaped by what the other components offer to the very same situation. For instance, a carpenter can seem proficient when using a hammer, but might turn into an apprentice if presented with a nail gun, a different tool, but with more or less the same function. Technology, user and social factors accentuate specific characteristics in the other components. A person in a technical mediation is not, then, the person per se, but is the person as constituted as a specific something. The skills, the knowledge, the competencies and the beliefs a person uses actively in dealing with a technology would not necessarily have mattered in a different technical mediation.

A user is not a given entity; a person is turned into a user in being presented with a potential use, a possibility of the material item. To become a shooter, Bruno Latour tells us, there must be not just a person, but a person and a gun; the gun constitutes the person as a ‘shooter’. The gun brings out this potential aspect of the person. However, this is reciprocal. Whereas a technology constitutes a person as a specific kind of user, the person, situated in a specific social setting and with a personal history, elicits specific aspects of the material item, constituting it as a specific type of equipment (Latour 1999, p. 176ff). Correspondingly, the context is not just everything that surrounds a technical mediation. For instance, Heidegger (1962) argues that a hammer points out its social and physical context; those things that cannot be hammered on are simply not part of the context of the tool (p. 97f/68f). So, only by being in a ‘hammerable’ context is a hammer a hammer; outside of it, it can be a paperweight, a window smasher or a murder weapon, but, as mentioned, the context in turn relates to the ‘allocating power’ of the hammer. Technology and context, as well as technology and user, are mutually defining. A technical mediation therefore is a totality, comprising constituents that are themselves only constituted through the same totality.

In ending this section, I should mention two things. First, although the co-constitutional relation comes across as a rather abstract conception, I believe it reflects how we deal with various technologies that enter our lives. We expect a technology to have a certain flexibility, so that it can be adapted to the kinds of practices we are involved in. If the technology lacks this flexibility, we might reject it altogether. For instance, telecare technologies have low penetration into healthcare because practitioners find them ill suited to their situation and not flexible enough to be taken into existing care practices.Footnote 21 On the other hand, we are prepared to adapt ourselves to a technology. Its flexibility is constrained because it needs to function in specific manners, so that in order for us to make use of it at all, we acknowledge and relate to its more resistant aspects. Furthermore, we expect that the degrees of flexibility will differ due to various circumstances. An everyday tool like the mobile phone is something we expect to be able to personalise and ‘take’ full control over, whereas facing an illness and a subsequent period of care and recuperation might make us more willing to adapt to the requirements of a technology. How we react and how we constitute ourselves in relation to a technology are related to both technical and situational aspects.

The second thing to be mentioned is that the co-constitutional relation might be taken to imply a highly relativist position. After all, if everything is de-substantialised, what happens to meaning; how can we understand and predict anything in our surroundings? In fact, rather than pulling in this direction, the co-constitutional relation reflects how meaning is made. As the hermeneutic tradition teaches us, meaning is not something static; we do not understand the world in terms of essences. The co-constitutional relation points us not just to an understanding of meaning as constructed, but indicates how it is constructed. However, in order to grasp this line of reasoning, we need to examine the concept of articulation.

4.2 Articulation

The above has significant implications for our understanding of the outcomes of technical mediations. Whatever object (event, phenomenon, idea, plan or prototype) results from a technical mediation is no mere object, but should be considered an articulation; articulated according both to the situated requirement (the task at hand) and to the totality engaged in the technical mediation. Through technologies with other properties, alternative contexts, or through users with other preferences, competences and knowledge, the technical mediation would have articulated a different kind of object. However, it is important to note that the result is not necessarily less accurate; it would have been a different way of presenting the object, suited to the particularities of that mediation. ‘Articulation’ does not imply arbitrariness, but indicates how an object is instigated to present itself.

An X-ray image can serve as an example of this non-arbitrariness. The X-ray image results from the encounter of the X-ray technology, the competent use by an X-ray technician, the X-ray practice and the body itself. This is the constituted totality that is engaged in the technical mediation. Decisive aspects here are, for instance, why an X-ray is taken at all, the function of the X-ray equipment, whether the technician can operate the equipment properly, and so on. The X-ray image produced is a certain way of representing the body. It is not the ‘truest’ or ‘most objective’ representation of the body, but is a functional mode of the body that reveals certain aspects relevant for the task.

The representation can hardly be called a purely objective depiction of the body, dependent as it is on the specific technical constraints of the equipment, but it would likewise be absurd to regard it as an arbitrary construction, as it is clearly enabled and constrained by bodily properties. The technology focuses on, enhances, augments and translates certain aspects of the body, while at the same time playing down or ignoring others. It reveals specific aspects of the body and, at the same time, it ignores, conceals, other possible ways of representing the body. Technician, technology and body are in this case mutual constraints. An X-ray does not invent something that is not there, but projects a certain functional perspective on the body. It is an interpretation prior to what we normally would label the interpretation of an X-ray image.

‘Articulation’ indicates the way we can become acquainted with something; only in as much as something is articulated, that is, as standing out from that which it is not, can we approach it. From this, it follows that articulation should not be seen negatively, as the ‘only extent’ to which we can know something, but in the positive way, as the condition for us to know anything at all. The claims that a scientific image, fact or model represents reality, or conversely, that it only presents a constructed reality, are arguments performed over a set of shared but ill-conceived presuppositions. If something is said to be a mere constructed reality, this presupposes that there is something called reality somewhere behind or beyond it. Instead, regarding technically mediated ‘products’ as articulations means that we use technology as a way of making reality. Use of technology should be seen neither as granting us ‘direct access’ to a pure nature-in-itself, nor as putting a veil over this nature-in-itself. There is no such thing as a reality that we, by the proper means, can describe in ‘pure’ objective terms. The articulation is for this reason not a reduction or a diminished form of reality, but is itself reality.Footnote 22

4.3 Trajectory

‘Articulation’ discloses that technical mediation involves a revealing–concealing structure; a possibility is revealed, in a specific way, and inevitably, another possibility is concealed. Often, the revealing aspect will be taken up and conventionalised into a practice, meaning that the specific kind of revealing a technological item contributes to is stabilised. We then think of it as the function of the technology and rarely reflect on the revealing–concealing aspect of the mediation in which the item is involved. In hindsight, the development towards closure and stabilisation might appear to be a ‘logical’ or ‘natural’ trajectory, but there is no necessity in stabilisation; things could have been revealed (and concealed) differently.Footnote 23

Actually, instrumentalism might be seen as an effect of stabilisation; technologies are so effortlessly embedded within a certain practice that they appear only to fulfil a function within this setting. When their mode of revealing has become stabilised, their co-constitutional role is itself concealed within the contextual whole. The result is a double forgetfulness; we forget that which is concealed, and we forget why it is concealed. To fully grasp this aspect of technical mediation, we must consider how a socio-technological trajectory reinforces a stabilised function and, by way of that, reinforces both the mode of revealing and the mode of concealing.

Once a technology is stabilised, the direction for further development is partially laid down, in the sense that the potential found in the technology opens us up to specific ways of developing it further (Ihde 2002, p. 106ff). Galileo Galilei’s first telescope had a magnification of ×3, while his last magnified ×32; once the utility of the first had been established, further refinement was motivated. This does not mean that we suddenly find ourselves obliged in any way to follow this trajectory. Instead, we should regard trajectories as possibilities that are opened up, revealed, by the technology. Whether, and to what extent, a technology is further developed depends on social, economic, ethical, political and scientific processes in the constitutional manner described in section 4.1. An example here is the possibilities for mass production opened up by the invention of the moving assembly line. Such a technological revealing contributes to a social pull in specific directions.Footnote 24 Another, more curious example is how the industrial manufacturing of soap somehow led us to feel a need to be clean. This social pull for cleanliness has not always been present (Winner 1977, p. 84).

Once a technology is stabilised in a socio-technological system, the aspects regarded as its strengths and weaknesses are tinkered with. How should we make the piece of technology better? How should we eliminate those aspects of the item that distract us from its primary function? Tinkering can be done ‘positively’, when we try to make the technology better: faster computers, increased RAM, digital cameras with higher resolution, etc. The same trend can be found early and throughout humankind’s tool use: better stone hammers and better spears and cutters. Galileo’s use of the telescope created a ‘need’ for better and stronger telescopes, eventually ending in modern radio and gamma-ray telescopes.

Another type of trajectory is set off when we react to unhappy consequences of innovation. Take the example of pollution, an effect of industrial and transport technologies. Addressing this problem involves many socio-political and juridical actions and, of course, new kinds of technologies developed precisely to meet these unhappy consequences: decontamination units, biofuels, catalytic incinerators and so on. Trajectories are an integral aspect of the revealing–concealing structure of technology; once we are capable of saying what something is for and what it is able to do (positively and negatively), we are also able to analyse the parameters that should make it better and more powerful, or less harmful. Regardless of whether it is an enhancement or a reduction, socio-technological development takes a certain path relative to how technology reveals.

An important aspect of trajectories is that their potential path is very hard to predict. For instance, these days we know that nanotechnology and nano-engineering will bring sweeping changes to a variety of aspects of our society (medicine, foods, environmental issues, etc.), but this field of research is still embryonic, so we do not know the full potential—positive or negative—of the ‘nano-trajectory’; neither in the technological sense nor in terms of its soft impacts. It is precisely this insecurity that has spurred on ELSA research and initiatives like Responsible Innovation.

Langdon Winner takes another, less optimistic, approach when he asks whether it is ‘wise to experiment with technological applications likely to produce irreversible effects’ (Winner 2003). In view of the concept of trajectory, he has a point; once society has started ‘down’ a trajectory, it can be hard to turn around. It is an unavoidable side of innovation that parts of a possible world are concealed; any revealing is a simultaneous concealing. Socio-technological trajectories only reinforce this, which further substantiates the notion of the co-constitutional role of technology in our society. However, Winner’s remark should not be taken to imply that we become caught up in a technologically determined trajectory; rather, it should be taken as a methodological remark on a par with Collingridge’s dilemma. We need to be aware that some choices set us off down a trajectory that might be very hard to decelerate. Not necessarily because of technology ‘itself’, but because of the socio-technological structure that tends to become organised around innovation.

5 Concluding Remarks

The revealing–concealing structure of technology is the key conceptualisation in the interdependent framework. It necessitates a different approach to the proactive assumption than the externalist framework does. Regarding technology and the social as interdependent, as mutually constituting, should eliminate notions following from the externalist framework. Attempting to constrain unhappy soft impacts through the design stage will, in the externalist framework, be regarded as a matter of controlling technologies in an instrumentalist manner. Nor is technology a semi-autonomous force with a potential for getting out of control if given half a chance, as indicated in Collingridge’s dilemma. Technology has an impact on the shape of society, yes, but this revealing–concealing style of impact must be acknowledged and dealt with as such; neither exaggerated nor overlooked.Footnote 25

We can see this in the approach to the ethical dimension of the proactive assumption that follows from the revealing–concealing structure. Being in a world that is revealed to us partly because of the technological presence makes it futile to uphold the view that autonomy means having an independent relation to technology. A patient who is enabled to stay at home because of a telecare device gains a certain amount of autonomy compared to being hospitalised (or spending hours on end travelling back and forth). This autonomy might feed into the patient’s self-conception, potentially empowering him, lessening the feeling of being a patient. Our understanding of autonomy must take on a different form, as a way of relating to the technological presence, rather than as a mere choice between accepting or rejecting it. Similarly, co-constitution is (by default) a democratic relation, so rather than regarding a proactive approach as posing a problem for democracy, the interdependent view shows us how such a democratic relation can proceed. However, we need to acknowledge that when we shape technology, we and our ways of behaving, perceiving and acting are also shaped by technology. To shape technology, we need to entrust ourselves to technology.Footnote 26

However, it is the implications for the practical dimension of the proactive assumption that are most important for the actual design work. The interdependent view implies a methodological insecurity in the design process. Through the revealing–concealing structure, we see why this is so, but what does it mean for the practical tasks of design? For one thing, it means that we should not choose an approach that strives for control; we will of course be able to anticipate certain facets of the user context, perhaps even most of them, but we can never fully know the user context. We will always be uncertain about the precise effect a technology has when it is introduced to a certain practice: what it will reveal in terms of potential trajectories, in terms of ways of relating to each other, and in terms of its effect on values and norms. Methodological insecurity should be taken as the cue to design rather than as a weakness of it.

Contrary to what one might expect, presupposing insecurity can lead to a higher degree of influence. We will then be forced to relate more deliberately to the implementation stage of innovation; how the technology is introduced to the social setting. Since we should not just expect a technology to be used in a manner that is symmetrical to its design, methodological insecurity could, for instance, force us to consider additional measures to shape use beyond the design and development stage. Methodological insecurity therefore means not underestimating the ‘outside’ influences on the shaping of a technology and its further effects, which, as we saw, is the problem for positivist approaches.

More concretely, this should create an awareness of the shaping process outside the actual design work. In the case of telecare technologies, we might take this as a cue to think of supplemental contextual methods when such technologies are introduced to a care setting, that is, to leave an opportunity for involved implementation. Patients and healthcare personnel might be given more responsibility in shaping their own relation to the telecare devices. As I said above, what connects two patients with a heart condition might just be the disease; apart from that, their lives, needs and requirements might differ profoundly. Being given the opportunity to co-shape the conditions for their own involvement might also create more responsibility in the patient. Rather than this responsibility being exerted from the ‘outside’, it can then be constructed from the ‘inside’. Successful adaptation of telecare (and other kinds of) technologies is not just down to how the technologies are designed, but also to how they are introduced to users. However, this stage must also be anticipated at the design end in order to allow for the co-construction of the ways a telecare technology impacts on the healthcare process and the overall life of the patient.

A possible positive side effect of implemental flexibility is that patients and healthcare personnel co-construct the values and norms that surround their use of telecare devices. Relating in an ethically sound or socially responsible manner to the technologies would then no longer be a matter of saying ‘yes’ or ‘no’ to them, but a matter of relating, actively, to them, in order to give the values and norms (and other trajectories) a shape (Kiran and Verbeek 2010; Verbeek 2006a, 2011).