1 Introduction

Intelligent, autonomous e-coaching systems are becoming increasingly mainstream, offering people a wide variety of strategies and techniques intended to help them fulfill their goals for self-improvement (Blanson Henkemans et al. 2009; Klein et al. 2011; Kaptein et al. 2012). While these innovative systems offer new and exciting opportunities for individualized coaching in a range of different domains, they also highlight a gap in our current understanding of the intimate relationship between an e-coaching system on the one hand and a human user on the other, and of the effect that this relationship has on the user's self-directedness, or autonomy. As Oinas-Kukkonen and Harjumaa have rightly observed, ‘information technology is never neutral’ (Oinas-Kukkonen and Harjumaa 2008, p. 166), implying that autonomous support systems are always nudging people’s behavior in one direction or another through the type of information they present and the way in which they present it.

This aspect is amplified in autonomous e-coaching systems, especially those that combine persuasive techniques such as reduction, tunneling, tailoring and self-monitoring (Fogg 2003) with personalization (Berkovsky et al. 2012) to actively influence their users’ behavior in pursuit of lasting behavior change. A prominent example of such a system is Klein, Mogles and Van Wissen’s eMate (Klein et al. 2011), which promotes a healthy lifestyle for people managing chronic illness (e.g., type 2 diabetes, HIV or cardiovascular disease) by inferring the person’s behavior change state from individual measures and sending tailored motivational text messages to influence that state when deemed necessary. Clearly, there is a positive drive behind these innovations, but what is striking is that there seems to be very little awareness (except for a meta-study by Torning and Oinas-Kukkonen 2009) that such systems are in fact interfering with people’s decision-making processes by directly or indirectly offering suggestions for action. This interference raises ethical concerns. Given that ‘being an autonomous person’ seems to entail that one decides on the basis of options that are in some relevant sense ‘one’s own’, the question is whether such interference, despite explicit consent, risks negatively affecting people’s autonomy and, by extension, their well-being (Ryan and Deci 2000).

This paper has three distinct aims. The first is to raise awareness among system designers that autonomous e-coaching technologies have advanced to a point where the suggestions for action that a system offers seriously affect the options that users consider. In order to show this, we build on recent work on option generation as found in disciplines as diverse as philosophy, psychology and computer science. Although this interdisciplinary approach might at times seem to complicate issues, we believe that e-coaching developers can benefit strongly from both the conceptual distinctions made in philosophy and the empirical findings gathered in the different fields discussed. On the basis of such findings, it will for example become clear that especially with systems that interact intensively with a user, it quickly becomes difficult to distinguish between those actions that were generated independently by the user and those that were steered (guided) by the e-coaching system. Rather than passing judgement on whether steering (guiding) is good or bad in general, the paper is concerned with the implications of the interplay between a person’s options and the e-coaching system’s suggestions. The second and third aims are to show that understanding this interplay is crucial with respect to the effectiveness of the system (Andrew et al. 2007) and the ethical soundness of the system (Torning and Oinas-Kukkonen 2009), respectively. This work also offers some preliminary thoughts on how to think about making the right type of suggestions.

The structure of this paper is as follows. Section 2 reviews and discusses the growing attention to option generation in different disciplines engaged in the study of decision making. In Sect. 3, we explain how, on the basis of the research done so far, the notions of ‘option’ and ‘option generation’ can be understood. In the three sections thereafter, we argue that e-coaching systems have the ability to influence the options that people consider (Sect. 4) and that understanding the process of generating options, as well as the interplay between e-coaching and option generation, is important for designing and developing e-coaching systems that are effective (Sect. 5) and respectful of people’s autonomy (Sect. 6). Finally, in Sect. 7, we conclude with a sketch of the practical implications of this work and offer suggestions for further research.

2 Existing work on option generation in decision-making research

Although most situations seem to allow for countless possibilities for action, limits to available information, cognitive capacity and time cause people to consider only some of these as actual options, while ignoring many others (Simon 1991). This raises the question: How does one actually generate a set of viable options for action? This important question has, strangely enough, been ignored, or at least undervalued, for a long time (Kalis et al. 2008; Smaldino and Richerson 2012). The more general question ‘which factors guide human decision making?’, on the other hand, has been studied extensively in disciplines ranging from philosophy and psychology to behavioral economics and computer science. In this section, we aim to show how current research on decision making in these different disciplines is slowly increasing awareness that more insight into the processes of option generation is needed.

Contemporary philosophical discussions of action focus on questions such as what distinguishes acts from ‘mere behavior’ (e.g., Thompson 2008; Setiya 2009) and how decisions and intentions can lead to action in the physical world (Mele 2009; Buckareff and Aguilar 2010). However, most theories presuppose that people are able to see options for action, choose one of them and act accordingly. The question of how people generate options for action has only recently begun to receive philosophical attention (Kalis et al. 2008; Illies and Meijers 2009; Smith 2010). Smith, for example, has introduced the notion of ‘practical imagination’ as the capacity of human beings to conceive or ‘see’ certain possibilities in their environment. However, he too argues that the question of why we ‘see’ certain possibilities and not others has not been given sufficient thought in philosophical theories of decision making. One plausible approach toward an answer is to analyze research on the role of emotions in decision making. It is well known that emotions can make certain aspects of the environment ‘stand out’ as particularly salient or attractive, and as such, they might play a guiding role in the generation of options for action (Gibson 1979; Damasio 1999).

In other disciplines, a similar trend is discernible. Over the years, empirical researchers in behavioral economics and psychology have performed a great many studies to learn more about people’s choice behavior (e.g., Thaler 1980, 1994; Kahneman et al. 1999; McGraw et al. 2010). However, many of these studies only consider a single decision-maker dealing with a well-defined problem space in which the options for action are either limited by the bounds of the experimental setup or presented as a given. Consequently, the option generation phase of decision making has often been conflated with option selection or ignored altogether, leading to the undervaluation of option generation in this literature as well. Notable exceptions are work by Gettys et al. on hypothesis generation (Gettys et al. 1986) and by Klein et al. on option generation in skilled and non-skilled chess players (Klein et al. 1995).

In everyday life, however, people are often confronted with choice scenarios where options are not simply given, but have to be generated. As Keller and Ho observed early on, ‘many real decision tasks are ill defined, i.e., the options, attributes, outcomes and states of nature are not yet specified’ (Keller and Ho 1988, p. 715). Such tasks force people to use heuristic strategies, such as the representativeness heuristic of measuring ‘its similarity to a set of common or previous problems stored in their long-term memories’ (Keller and Ho 1988, p. 717) or the availability heuristic, where people ‘assess the likelihood of risks by asking how readily examples come to mind’ (Thaler and Sunstein 2008, p. 27). When faced with unfamiliar, ill-structured problems, however, one cannot use heuristic strategies, because there are no ‘prototypical or causal patterns [...] stored in long-term memory,’ meaning that ‘a menu of options is not readily accessible in memory and actions cannot be quickly retrieved by searching memory’ (Keller and Ho 1988, p. 718). As Kalis et al. (2008) note, it is plausible that this difference in familiarity corresponds conceptually to the effort that is needed: in familiar or well-constrained situations, option generation requires less effort and might rely more on processes associated with more or less automatic retrieval from long-term memory, whereas in unfamiliar or complex situations option generation is more effortful and therefore relies more on processes associated with executive function. It is these kinds of cases, where unfamiliarity together with the openness of the scenario forces people to think of new options, that have not received the scientific attention they deserve (Johnson and Raab 2003; Ward et al. 2011; Smaldino and Richerson 2012).

Very recently, Smaldino and Richerson have distinguished a range of factors involved in the process of option generation in humans in order to clarify the problem. First, they acknowledge an important role for the environment, stating that options ‘are constrained by the potential behaviors afforded by the environment’ (Smaldino and Richerson 2012, p. 4). Secondly, there are psycho-biological factors such as perceptual biases, personality traits, affect, cognitive biases (e.g., framing and anchoring effects; Kahneman and Tversky 2000), sex and age. Finally, there are sociocultural factors, such as the drive to be social, imitation, emotion contagion, communication and culture itself. Their contribution is a positive indication that the gap in our understanding of option generation has been acknowledged and that researchers are working to close it. However, Smaldino and Richerson would be the first to acknowledge that much work remains to be done before we have a full understanding of the factors involved.

Within the domain of informatics and intelligent agent systems, decision making is an important area of study (e.g., Lakhmi and Nguyen 2009; Kamphorst et al. 2009; Gal et al. 2010). In this context, option generation has always played a role by necessity; for an agent system, there is simply no escape from having a mechanism that generates options. Interestingly, though, this phase has often been folded into the action selection architecture. Take for example a paper by Franklin and Graesser, in which they write that to describe an autonomous agent, one has to describe its environment, sensing capabilities, actions, drives and ‘action selection architecture’ (Franklin and Graesser 1997). In agent systems, option generation will often involve a type of search algorithm that goes through facts about prior experiences, comparing their characteristics against those of the current situation. A selection mechanism can then try to predict the outcome of each of the (limited top set of) options and weigh those outcomes, either making a decision to select and exploit an option or exploring further options. However, even though similar accounts have also been proposed in models of human decision making (e.g., Daw et al. 2006; Cohen et al. 2007), such approaches do not sufficiently capture the complexity of the option generation process in humans (Smaldino and Richerson 2012).
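To make this retrieval-and-selection picture concrete, the following minimal sketch shows option generation as a similarity search over stored cases, followed by a simple exploit-or-explore selection step. It is our own illustration; the data structures, similarity measure and threshold are assumptions, not taken from any of the systems cited above.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """A stored prior experience: situation features, the action taken, its observed utility."""
    features: dict[str, float]
    action: str
    utility: float

def similarity(a: dict[str, float], b: dict[str, float]) -> float:
    """Naive similarity: negative L1 distance over the features two situations share."""
    shared = set(a) & set(b)
    return -sum(abs(a[k] - b[k]) for k in shared) if shared else float("-inf")

def generate_options(memory: list[Case], situation: dict[str, float], top_k: int = 3) -> list[Case]:
    """Option generation as memory search: retrieve the top-k cases most similar to the situation."""
    return sorted(memory, key=lambda c: similarity(c.features, situation), reverse=True)[:top_k]

def select_action(options: list[Case], exploit_threshold: float = 0.5) -> str:
    """Option selection: exploit the best retrieved option, or signal that more exploration is needed."""
    if not options:
        return "explore-further"
    best = max(options, key=lambda c: c.utility)
    return best.action if best.utility >= exploit_threshold else "explore-further"

memory = [
    Case({"hour": 13.0, "hungry": 1.0}, "eat lunch", 0.9),
    Case({"hour": 13.0, "hungry": 0.1}, "go for a walk", 0.7),
]
print(select_action(generate_options(memory, {"hour": 13.0, "hungry": 0.9})))  # eat lunch
```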

In the young field of persuasive technology, options also play an important but often implicit role. Viewed from a decision-making perspective, influencing the options that a person considers is the primary target of systems that generate suggestions for the user to follow. Consider for example the system developed by Kaptein et al. that targets snacking behavior by tailoring text messages (SMS) on the basis of an individual measure of susceptibility to different social influence strategies, such as commitment or authority (Kaptein et al. 2012). Such systems specifically aim to influence the options that the user considers (in this case, the system tries to ensure that the user will not consider options that involve snacking). So far, however, this aspect of the intervention has not been explicitly discussed in the literature.
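The tailoring logic of such a system can be sketched roughly as follows. This is a hypothetical illustration in the spirit of Kaptein et al. (2012); the strategy names, susceptibility scores and message texts are our own assumptions, not taken from their implementation.

```python
# Hypothetical susceptibility profile, e.g., estimated from a questionnaire or past responses.
susceptibility = {"commitment": 0.8, "authority": 0.3, "consensus": 0.5}

# One candidate SMS per influence strategy (illustrative texts).
messages = {
    "commitment": "You promised yourself a snack-free afternoon. Stay on track!",
    "authority": "Dietitians advise skipping the mid-afternoon snack.",
    "consensus": "Most participants skip their afternoon snack today. Join them!",
}

def tailor_message(profile: dict[str, float], candidates: dict[str, str]) -> str:
    """Send the message keyed to the influence strategy the user is most susceptible to."""
    best_strategy = max(profile, key=profile.get)  # here: 'commitment'
    return candidates[best_strategy]

print(tailor_message(susceptibility, messages))
```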

In this section, we have shown that in the various disciplines studying decision making, the process of option generation has received relatively little attention. In this paper, we wish to show why this undervaluation is not justified and that designers of autonomous e-coaching systems should take care to consider the user’s options. However, before we can develop our main claims on how e-coaching affects option generation, there are fundamental questions that should be answered first: What exactly are ‘options’ and what do we mean by ‘option generation’? These questions will be addressed next.

3 Options and option generation

Existing studies on option generation use the term ‘options’ in different ways. Nevertheless, most authors seem—at least implicitly—to adopt the view that options are representations of candidates for action (Ward et al. 2011; Raab et al. 2009). We share this view and will argue in this section that options are a special subset of action representations. We use the notion of action representations to indicate anything (descriptions, images, objects) that represents an action. The proposal presented in this section is based on a conceptual analysis developed in Kalis et al. (2013). It should be noted that this analysis is not itself based on empirical findings, but should be seen as a conceptual proposal for structuring future studies on option generation and e-coaching.

The first aspect of our analysis of options is that they are candidates for action as seen from the perspective of the actor. This means that options are distinct from objective possibilities (Kalis et al. 2008, 2013). For example, when someone is considering either watching a movie or reading a book, going grocery shopping is not an option, even though from a third-person perspective it could be ascribed to that person as a possible action to perform.

So, on our proposal, the actor must consider a certain perceived possibility as a candidate for action in order for it to count as an option. This also implies that options are not neutral action representations, but representations of actions with a certain affective value. Options are not just representations of possible things one could do (e.g., stop in the middle of the street and stand on one foot for an hour) but representations of possibilities that one actually considers, that is, action representations that have at least some positive value for the actor.

To further explicate what options are, it is useful to relate them to more familiar constructs such as goals, intentions and plans. Options, we contend, differ from goals in that even though options have at least some positive value for the actor, it is not the case that people intend to bring about every option that they consider. Taking an option into consideration does not imply any form of commitment to actually realize it. Goals, on the other hand, are often defined as representations of desired end states that one intends to realize (Kruglanski and Koepetz 2009). For the same reason, options are also different from intentions and plans. As Bratman has noted, both intentions and plans imply commitment and a certain level of inertia, or unwillingness to change: people generally stick to their intentions and plans without renewed deliberation (Bratman 1987, 2007). Options are very different in this respect. They are more than mere action representations in that they have affective values attached to them, but they lack the type of commitment that intentions and plans typically carry. Options are fleeting, ready to be rejected at the action selection stage of decision making.

Now that we have provided a conceptual account of options, we turn to the process of generating them. In Kalis et al. (2013), it is argued that most researchers do not think of option generation as a single distinct psychological process, but that there is probably a set of different psychological processes providing an actor with candidates for action. The processes involved might differ depending on, for example, the familiarity or complexity of the decision-making situation. In order to identify the relevant processes, much more empirical work is needed; the limited research available so far suggests that memory retrieval, at least, plays an important role (Klein et al. 1995; Kaiser et al. 2013). For a more detailed description of existing work, see Kalis et al. (2008, 2013).

Our definition of options indicates that in typical cases of decision making, generating options precedes the intention formation and action stages. This does not imply, however, that options cannot be informed by previously made decisions and plans. On the contrary, it is very plausible that as a decision-making process progresses, option generation becomes increasingly constrained by choices already made. To illustrate this, consider the planning of a holiday. Early on, one may consider many countries as viable destinations, but once the decision has been made to travel by car, one’s range of options will be automatically constrained by that decision.
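In computational terms, this constraining effect can be pictured as a simple filter over candidate actions. The following is a minimal sketch of the holiday example; the destinations and the single attribute are made-up assumptions for illustration only.

```python
# Candidate destinations with one illustrative attribute each.
destinations = {
    "France": {"reachable_by_car": True},
    "Italy": {"reachable_by_car": True},
    "Japan": {"reachable_by_car": False},
    "Australia": {"reachable_by_car": False},
}

def constrain(candidates: dict[str, dict], decisions: dict[str, bool]) -> list[str]:
    """Keep only the candidates compatible with every decision made so far."""
    return [name for name, attrs in candidates.items()
            if all(attrs.get(k) == v for k, v in decisions.items())]

print(constrain(destinations, {}))                          # no decisions yet: all four remain
print(constrain(destinations, {"reachable_by_car": True}))  # after 'travel by car': France, Italy
```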

One might at this point be inclined to think that we understand option generation as a form of conscious deliberation. However, as stressed in Sect. 2, this is not the case. Just as there are both conscious and implicit forms of decision making, so too are there conscious and implicit forms of option generation. For example, in an experiment with medium-skilled handball players, Johnson and Raab found that ‘participants were not explicitly using particular strategies to produce their first choice, generated options, or final choice, indicated by many (over 30 % of participants) “reactive” responses such as doing “what came to mind first” or responding “by intuition”’ (Johnson and Raab 2003, p. 223). What we suggest is that there is option generation whenever there is decision making (either implicit or explicit) and that option generation can be understood in terms of a dual-process model (cf. Verplanken et al. 1998; Kahneman 2003; Wood and Neal 2007). See Fig. 1 for a schematic depiction of such a model. It consists of one direct feed that does not need any explicit attention (cf. Kahneman’s fast and automatic System 1) and one that is mediated by an attention-based, deliberative mechanism (related to Kahneman’s System 2). This duality in the model explains how at times people have a set of options to choose from without having to consciously generate options, whereas at other times people ‘stop and think’.

Fig. 1 A dual system model of option generation
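A rough computational reading of Fig. 1 might look as follows. This is our own sketch; the situations, the stored habitual options and the two-option trigger for engaging deliberation are purely illustrative assumptions.

```python
# System 1: automatic, associative retrieval of options tied to familiar situations.
habitual_options = {
    "arrive at office": ["take the elevator"],
    "evening at home": ["watch television", "read a book"],
}

def deliberate(situation: str) -> list[str]:
    """System 2: effortful, attention-based option generation ('stop and think'); stubbed here."""
    return [f"deliberately search for ways to handle: {situation}"]

def generate_options(situation: str) -> list[str]:
    options = list(habitual_options.get(situation, []))  # fast, automatic feed
    if len(options) < 2:                                 # too few options? engage deliberation
        options += deliberate(situation)
    return options

print(generate_options("evening at home"))   # the automatic feed suffices
print(generate_options("arrive at office"))  # deliberation kicks in
```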

4 How e-coaching affects the options people consider

So far we have shown that there is a growing awareness in different academic disciplines that option generation should be incorporated in models of decision making. In addition, we have proposed a way of understanding the notions of ‘option’ and ‘option generation’. This brings us to the first main aim of this paper: to show that e-coaching systems influence the options that people consider.

Autonomous e-coaching systems are a class of decision support systems designed to assist people with self-improvement in a variety of areas (Warner 2012). Early knowledge-based decision support systems (KBDSSs) offered suggestions for decisions in complex but reasonably well-defined problem spaces, aiding for example medical diagnosis by processing clinical symptoms (e.g., Buchanan and Shortliffe 1984; Barnett et al. 1987). These suggestions, generated on the basis of rules, were either exhaustive for that state of the domain or filtered on the basis of heuristics for structuring the problem space or optimizing the outcome. Today, there are decision support systems that are much more sophisticated and that deal with uncertainty (Leal de Matos and Powell 2003) and with changing circumstances and environments (Ren et al. 2009; Pillac et al. 2012). However, what we are concerned with in this paper are systems that offer individuals personal coaching in a domain of their everyday lives for an extended period of time; systems with which, partly due to the recurrent nature of the interactions, users develop complex relationships involving reciprocity and trust (Pruitt and Carnevale 1993; Lee and Nass 2010; Van Wissen et al. 2012). These systems have the extremely difficult task of making ‘the right suggestion at the right time’, while it is often not unequivocally clear what would make a suggestion the right one. Relevant is not only whether a proposed action fits the person’s preferences and values and whether it contributes to the fulfillment of a goal, but also how the proposed action relates to the options that the person is already considering.

Autonomous e-coaching systems influence people’s decision making by offering suggestions for action in the hope that the user will seriously entertain the idea of acting upon the suggestion (i.e., see it as an option). Using different techniques for targeting automatic as well as deliberative decision-making processes, these systems try to persuade people to behave in a certain way (e.g., making healthier food choices). But regardless of whether the system targets automatic or deliberate processes, it will necessarily affect the options people (consciously or not) consider. Let us explain with two examples.

Consider Alice, a woman who is tired of being overweight, but who nevertheless has a hard time making decisions that will benefit her health. To help her achieve a healthier lifestyle, Alice has employed an e-coaching system that motivates her to be more active throughout her day. For instance, when she enters the building where she works on the second floor, the system picks up her location by GPS and infers that Alice is faced with a decision to take either the elevator or the stairs. In line with Alice’s goals, the system suggests via a text message that she take the stairs (e.g., ‘it would be great if you would take the stairs instead of the elevator’). After she has read the message, the same two possibilities are open to Alice as before: she can take either the stairs or the elevator. Her options, however, will have changed. Either they will have changed in number—for example, if she had only automatically generated the option of taking the elevator and only now considers taking the stairs—or in affective value. For the suggestion will likely have reminded Alice of her goal to be healthy and her intention to be more active throughout the day. This goal activation will at the very least lead her to reevaluate her options.
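The trigger behind such an intervention can be pictured as a simple context rule. The sketch below is hypothetical and covers the Alice example only; the context fields, the rule and the message text are our assumptions, not an account of any deployed system.

```python
from dataclasses import dataclass

@dataclass
class Context:
    location: str       # e.g., derived from GPS
    has_elevator: bool  # e.g., from a map of the building
    user_goal: str      # from the user's profile

def stairs_rule(ctx: Context) -> str | None:
    """Fire a suggestion when the user faces a stairs-versus-elevator choice."""
    if ctx.location == "office entrance" and ctx.has_elevator and ctx.user_goal == "be more active":
        return "It would be great if you would take the stairs instead of the elevator."
    return None  # no intervention in this context

message = stairs_rule(Context("office entrance", True, "be more active"))
if message:
    print(f"SMS to Alice: {message}")
```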

In a similar vein, an example can be given of a system influencing a person’s options in unconscious decision making. Suppose Bob has employed an e-coaching system that helps him go to bed on time. If the system slowly dims the lights in the room, then that is a natural cue for Bob that it might be time for bed. Bob might not notice it consciously, but the system is making a subtle suggestion, leading Bob into a state in which he is unlikely to consider starting another activity. As was the case with Alice, Bob will entertain different options and will also value certain options differently because of the system’s suggestion.

So, e-coaching is tightly linked to the options people consider. But why is this important? We argue that it is important for two reasons. First, the options a person considers matter for the effectiveness of the system. Just consider a reversed scenario for Alice, in which she was only considering taking the stairs when she arrived at work. Receiving the text message, however, makes her distinctly aware of the tempting option of taking the elevator, which she decides to take as a reward for eating a light breakfast. In this case, the e-coaching system’s suggestion had an adverse effect. We come back to similar cases in Sect. 5. Secondly, it is important because generating one’s own options for action appears to be an important factor in being an autonomous person who chooses his or her own path in life. If interventions from e-coaching systems interfere with the options people consider, an account must be given either of how such interventions can be justified, or of why the worries are unfounded. We return to these difficult ethical concerns about autonomy in Sect. 6.

5 Designing effective e-coaching systems

In this section and the next, we focus on the following normative question: What kind of suggestions should e-coaching systems offer their users? The main problem is that it is far from obvious what makes a particular suggestion a good one. As Andrew et al. have put it, suggestion technology is about kairos: ‘providing the right information at the best time’ (Andrew et al. 2007, p. 259). But what exactly constitutes the right information?

To answer this question, it would be very instructive to understand how humans generate their options—especially in unfamiliar, ill-defined choice scenarios—in order to mimic and possibly even enhance this process in the e-coaching system. As shown in the previous sections, there is a growing body of relevant research on option generation in fields such as philosophy of action, psychology, behavioral economics and computer science. A better understanding of this stage of the decision-making process could help answer the question of what type of suggestions a system should make in order to help the user attain his or her goals. For instance, should a system make suggestions that are very much in line with the user’s past behavior, or should it make suggestions in line with the user’s goals, even if these clash with past behavior? And should the system make suggestions that users would never have thought of on their own, or ones that the user would have generated, but that were not salient enough to be selected? The point here is that ‘the choice of tactic [for making suggestions] to use for a particular product will be important to the success of the product’ (Andrew et al. 2007, p. 260). In short, without a proper understanding of the human option generation mechanism, it will be extremely difficult to know what kind of suggestions the e-coaching system should offer to best fit the individual. Knowing this is very important, however, because the more fitting the suggestion, the more persuasive power the system has (cf. the suggestion principle in Oinas-Kukkonen and Harjumaa 2009). In addition, making the right suggestions will improve the perceived expertise of the system, which will further boost the system’s persuasive powers (cf. the expertise principle in Oinas-Kukkonen and Harjumaa 2009).

We think that (part of) the answer lies in focusing on the options that a person might generate on his or her own. That is, one of the things an e-coaching system should take into account when preparing to make a suggestion is the range of possibilities a user might consider as options. To see how a lack of such information can lead to difficulties, consider Carol, who employs an e-coaching system to help her make sounder financial decisions. Suppose she is looking to invest an inheritance from her late uncle. If her e-coaching system suggests a risky but possibly highly profitable investment that she, being a risk-averse person, does not and would not consider an option, she will either make a decision that goes against the grain or ignore the advice for which she specifically employed the system. In either case, the trust relation between her and the system is likely to be harmed.

Another example is that of Dave, who desires to be the kind of person who leads an active lifestyle, but who needs external nudges to actually start being more active. In order to achieve his goal, Dave employs an e-coaching system that offers suggestions for various activities. Suppose that one day Dave is one nudge away from doing something active, but that he only considers indoor activities because it is chilly outside. If the system suggests going for a run in the park, Dave will dismiss this option, and the system will have failed to give the nudge that Dave wanted and needed to get going.

It is important to note that in both examples, the suggestions made by the system were in the interest of the users and in line with their overall goals. Moreover, from a third-person perspective the suggestions may well have been among the best possibilities available to the user at the time—the best possible investment and the best activity for burning calories, respectively. Still, neither system was as effective as it could have been.

Earlier we said that options are always action possibilities as seen from the perspective of the user. This means that a support system cannot generate options: It can only generate action possibilities that the user might or might not adopt as options. To begin answering the questions put above, we therefore think a good starting point would be the following: Effective e-coaches suggest action representations that would contribute to the fulfillment of the goal set by the user, in such a way that it is most likely that these representations will become viable options for the user.

From the point of view of a support system that has to generate and make suggestions for action for its user, four categories of action representations can be distinguished. First, there are action representations that the actor would probably come up with himself, regardless of any support, and which he evaluates at least somewhat positively (which makes them options for the user). An empirical question here is what effect making suggestions of this category has: Either such suggestions are redundant, or they help make a particular option more salient. An example of a redundant suggestion is a system proposing to go for a run when one has already put on one’s running shoes. An example of the second type is where one considers either going for a run (in accordance with one’s goal to stay healthy) or watching television, and is leaning toward the latter. When one then receives a suggestion to go for a run, this nudge might just make the running option salient enough to lead to the decision to go running.

Secondly, a system could offer action representations that the user would generate himself, but which he would not endorse on any level. The prospective effectiveness of this method is not very high: As long as the user does not evaluate an action representation positively, he will not consider that action as an option. The same applies to a third possible method: a system offering action representations that the user would not generate himself and would not endorse if suggested. Suggestions of this kind are likely to lead to frustration with the system, because they try to steer people toward actions they do not want to perform. However, to see that such suggestions might nevertheless elicit a positive effect, consider the following scenario. Two parents tell a child every night to go to bed at ten o’clock, whereas this is certainly not a positively valued option for the child. On a particular evening while being home alone, around ten o’clock, the child thinks ‘my parents think that I should go to bed now’—and more or less to the child’s own surprise, the child decides to go to bed. In analyzing this case, notice that there is no actual suggestion by the parents on that specific night: The child is unsupervised but acts on the suggestion that has been offered to him or her so many times before. A critical question is whether the generated action representation (i.e., going to bed) really qualifies as an option, but the fact that the child acts upon it suggests that it does. So, prima facie, it seems that in certain situations, mere exposure to a certain suggestion can over time lead to a positive evaluation of that suggestion. This kind of behavior would be in line with research on the ‘familiarity effect’ (e.g., Zajonc 1968) and would be another indicator that the iterative nature of the interactions can play a substantial role in bringing about effective behavior change (see Sect. 4).

Finally, a system could offer action representations that the user would not generate himself, but would endorse once they are presented to him. These are the ones that Smaldino and Richerson favor (see below), where the result of the coaching intervention is that the actor now has a wider range of options than before. But how can a system determine which action possibilities a user will find compelling? Here, philosophical analyses could provide a fruitful contribution. To give an example, Illies and Meijers have developed a useful framework for thinking about the attractiveness of possible actions that revolves around the notion of ‘Action Schemes’. An Action Scheme ‘is defined as the set of possible actions with different attractiveness that is available to an agent or group of agents in a given situation’ (Illies and Meijers 2009, p. 427). Illies and Meijers make two important contributions. First, they acknowledge that the attractiveness of a specific action is the result of a myriad of factors. They write: ‘it is influenced by the degree to which in a certain context the action corresponds to the desires, inclinations, or talents of an agent, with his previous history, his convictions, ideas, intuitions, and character’ (Illies and Meijers 2009, p. 427). Secondly, they recognize that technological artefacts—such as e-coaching systems—can influence such Action Schemes, directly and indirectly, ‘by modifying the set of possible actions available to her, including their attractiveness’ (Illies and Meijers 2009, p. 427). Their point concerns a much more general notion of technological artefacts than we are concerned with in this paper. However, given that e-coaching systems are a species of technological artefacts, we can subscribe to their observation that ‘[a]rtefacts do affect human actions, obviously, but we cannot fully understand their profound effects so long as we ignore their influence on the set of actions available to an agent in a given situation, where each option is presented in a certain attractive light’ (Illies and Meijers 2009, p. 434).
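The four categories can be summarized in a small decision table, keyed on whether the user would generate the action representation unaided and whether he or she would endorse it once suggested. The sketch below is our own formalization for illustration, not taken from the cited works.

```python
def categorize(would_generate: bool, would_endorse: bool) -> str:
    """Map a candidate action representation onto the four categories of this section."""
    if would_generate and would_endorse:
        return "1: user's own option (suggestion redundant, or raises its salience)"
    if would_generate and not would_endorse:
        return "2: self-generated but not endorsed (low prospective effectiveness)"
    if not would_generate and not would_endorse:
        return "3: neither generated nor endorsed (risk of frustration; mere exposure may shift it)"
    return "4: not self-generated but endorsed once suggested (widens the range of options)"

# On the analysis above, an effective e-coach would aim its suggestions mainly at categories 1 and 4.
print(categorize(would_generate=False, would_endorse=True))
```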

Summarizing: e-coaching systems cannot generate and suggest options, only action representations that fall into one of these four categories. Of all possible suggestions, then, some will be more effective than others, or will be perceived more positively in terms of subjective experience.

Consider again the question of whether an e-coaching system should make suggestions the user would never have thought of on his own. Smaldino and Richerson suggest that ‘advice is often most useful when it proposes options that were not previously considered’ (Smaldino and Richerson 2012, p. 7). They support this idea by citing work by Page (2007), who has shown that ‘groups are often best able to solve difficult problems when the constituent individuals are from diverse backgrounds, which increases the number and breadth of available options’ (Smaldino and Richerson 2012, p. 7). Offering this type of suggestion may indeed turn out to be the most effective strategy. However, it is important to note that where humans are involved, considerations besides effectiveness should also be taken seriously. The next section elaborates on ethical considerations about autonomy and responsibility that play a role in the relationship between human and e-coaching system.

6 Designing e-coaching systems that respect autonomy

E-coaching technologies touch upon a variety of ethical concerns, such as privacy (e.g., What data are collected? How, where and for how long are they stored? Who has access to them?), equal access and justice (e.g., How can it be ensured that the technology does not only benefit those who can afford expensive equipment?) and responsibility (e.g., Who is responsible for an action that was suggested by an e-coach?). In addition, e-coaching technology raises ethical questions about a person’s autonomy, because being influenced in one’s decision making seems to conflict with the classical understanding of self-directedness. Because option generation is most prominently connected to the ethical discussion about autonomy, we limit ourselves in this paper to this topic.

Autonomy is a central (but complex) aspect of human agency that we understand as having the freedom, the capacity and the authority to choose one’s own paths in life in accordance with one’s goals, values and preferences. To begin with, it is important to distinguish between the ideal of autonomy and the notion of perceived autonomy. The former is the object of ethical theorizing; the latter is a psychological measure of how free people perceive themselves to be. These concepts are strongly related, but not identical: It is conceivable that a governmental decision limits human autonomy without anyone perceiving it as such. Vice versa, it is possible that people perceive a decision as autonomy-limiting while ethically it is not. In this section, we discuss both kinds of autonomy and argue that both can have practical consequences for the design of e-coaching systems.

Friedman has argued extensively for the inclusion of human values such as autonomy in the design of software and agent systems (Friedman and Kahn 1992; Friedman 1996, 1997; Friedman and Nissenbaum 1997; Friedman et al. 2006). Her account of ‘user autonomy’ can be considered a type of perceived autonomy (for example, only if a user experiences difficulties with system complexity will it affect autonomy), although she derives the value of human autonomy from ethical theory. In a recent article, Kamphorst has also argued for the importance of human autonomy in system design, basing his argument on the value that many societies place on autonomy by subscribing to the Universal Declaration of Human Rights (Kamphorst 2012). In addition, he makes the case that human autonomy is especially important when designing behavior-influencing e-coaching systems, because those systems often have both the capacity and the opportunity to impede people’s autonomy.

In relation to option generation, many questions are as yet unexplored. For instance, if a user continuously follows suggestions that he or she would never have thought of on his or her own, how will this affect his or her autonomy? In questions like this one, the previously made distinction comes into focus. On the one hand, the question can be conceived as an empirical one concerning a user’s feelings of autonomy: whether unthought-of suggestions influence how free people perceive themselves to be. This question is open to empirical investigation. It is an important issue, too, because there is empirical evidence suggesting that systems that limit people’s perceived autonomy may over time turn out to be less effective, because a diminished sense of autonomy can negatively affect people’s well-being (Ryan and Deci 2000; Reis et al. 2000). In a similar vein, empirical questions can be asked about the effect of such suggestions on self-efficacy.

On the other hand, the question may be considered a theoretical one about normative accounts of autonomy. As discussed before, many such systems directly influence people’s intention formation process, and intention formation is generally seen as a central aspect of being autonomous. Christman and Anderson, for example, state that the core idea of autonomy is ‘being one’s own person, to be directed by considerations, desires, conditions and characteristics that are not simply imposed externally upon one, but are part of what can somehow be considered one’s authentic self’ (Christman and Anderson 2005, p. 3). In this respect, Schechtman (2004) distinguishes between different theoretical views on what it means to be such an ‘authentic self’. According to one view, it means being guided by desires and ideas one explicitly endorses. On this view, it makes no significant difference for autonomy whether the suggestions one follows were made by others or by oneself; what matters is that one acts on ideas that one agrees with and embraces [for more on this notion of endorsement, see the accounts of autonomy by Ricoeur (1966), Frankfurt (1971) and Dworkin (1988)]. On a competing view of what an authentic self is, however, acting authentically is acting on one’s own robust inclinations. On such a view, it certainly makes a difference for autonomy whether someone acts on his own impulses or on suggestions offered by others, regardless of whether one considers those suggestions to be good ones. This raises the conceptual question under which conditions ‘external support’ threatens autonomy—after all, it is highly implausible to suppose that autonomy necessarily precludes all external factors from playing a role in decision making.

In this paper, we do not wish to take a stance on either side. Our point is rather that an account must be given of the role of these external influences. Developers should therefore take into account the growing body of empirical knowledge on option generation processes and the factors that strengthen or suppress those processes. Such knowledge can have important normative implications. For example, it might turn out that being offered suggestions for action suppresses the agent’s internal option generation processes (a point so far unstudied, but consistent with research on self-determination and motivation; see Ryan and Deci 2000). Such a finding would imply that e-coaching makes the agent more dependent on external support. If, instead of strengthening a person’s own decision-making process, e-coaching were to replace it, this would certainly undermine the person’s autonomy.

With regard to e-coaching systems, this means that it is important to be able to explain how e-coaching can happen without impeding people’s autonomy. And, crucially, should such findings present moral reasons to avoid certain types of suggestions (recall the four categories from Sect. 5), then this ought to have practical implications for the design of e-coaching systems. Moreover, theoretical issues of autonomy extend to issues of responsibility, because autonomy is generally viewed as a prerequisite for ascribing responsibility (Anderson 2013). If we were to hold that someone who is being coached is not acting autonomously, how could we hold that person accountable for actions that follow directly from being coached? These are all important matters that deserve serious attention.

Unfortunately, Torning and Oinas-Kukkonen have shown that ethical considerations about persuasive systems have in general remained largely unaddressed (Torning and Oinas-Kukkonen 2009). Not only is this a surprising result, it is also slightly worrying. From a practical point of view, it is obvious that there is ample room for improvement in many people’s decision making, and employing support structures such as e-coaching systems may make sense in some settings. We do not dispute this. But because autonomy is a central moral value in our society, any system that limits a person’s autonomy deserves ethical scrutiny: Not every support strategy should be considered equally permissible. Developing solid practices to measure the effects of e-coaching systems on autonomy will take considerable collaborative effort by theorists and empirical researchers. Our goal here has been to define the area of research and to raise the important issues.

7 Practical implications and future work

We have discussed a great diversity of material to support our argument that human option generation is an important area of study for designers and engineers of e-coaching systems. Throughout the article, we have pointed toward several areas for further scientific exploration, both empirical and conceptual. In this final section, we will conclude with a more focused research agenda for improving the understanding of the interplay between suggestions and options.

To begin, we see three major empirical challenges. The first is to map out the effects that different types of (computer-delivered) suggestions have on people’s option generation processes. Such studies will require collaboration between psychologists, cognitive scientists and system engineers. Results from studies along these lines will provide insights into the questions discussed in Sect. 5.

The second is to develop computerized methods to accurately predict the options that people will consider. Here, scientists working on this problem can benefit from the expert knowledge that psychologists and cognitive scientists have of options and human option generation. In Sect. 3, we suggested that option generation can be more or less constrained, depending on the stage of the decision-making process. This implies that e-coaching systems will also have to be able to reason about how earlier decisions affect the options that the user will consider at a later time. Only when such prediction and reasoning mechanisms exist will e-coaching systems be able to take full advantage of the knowledge gained from empirical work on how suggestions affect options.

The third empirical challenge is to test whether and to what extent certain types of suggestions affect how autonomous people perceive themselves to be. In Sect. 6, we explained that ethical considerations, such as being respectful of people’s autonomy, should be taken into account when designing e-coaching systems. Taking ethical considerations seriously also means including them in empirical studies. For example, it would be insightful to determine whether people perceive systems as more respectful of autonomy if the system requires active participation of the user in the option generation process. Simply offering suggestions A, B and C might diminish perceived autonomy, whereas the possibility for users to add options of their own might strengthen it. Studying questions such as these is practically feasible and a good way to learn more about how different suggestion tactics can have different effects on people’s perceived autonomy. As mentioned before, the ideal of autonomy does not necessarily correspond to people’s perceived autonomy. Nevertheless, results from such studies will feed directly into theoretical work about autonomy and, later, ethical assessment.

On the conceptual side, there is a major theoretical challenge: to provide a convincing account of whether and under what conditions external support can threaten autonomy (see Sect. 6). A solid theoretical framework will help to make sense of empirical results. Moreover, should such an account present moral reasons why certain types of suggestions are undesirable, then this will have practical implications for the design of e-coaching systems.

As things presently stand, it is too early to provide any definite ethical guidelines for developing e-coaching systems. However, when gaps in our understanding about the interplay between suggestions and options are (at least partially) bridged, the state-of-the-art theories and empirical findings can be assessed to provide such guidelines.

To conclude, our goal has been to raise awareness for important issues regarding autonomous e-coaching systems and the interplay between suggestions and options. What is evident from the research agenda is that to achieve progress in this area, disciplinary boundaries will have to be crossed. It is our hope that through interdisciplinary collaboration in this field, developers and engineers of e-coaching systems can improve their systems with regard to both effectiveness and autonomy considerations.