1 Introduction

In this paper, we examine ethical challenges and opportunities of health empowerment through technology. As the word “empowerment” suggests, health empowerment is centered on power and the human capability to access health care and promote health. It is a broad concept deployed in several fields. In the field of health promotion, it is often used to refer to a process through which people gain more control over their health-related quality of life, and this is sometimes contrasted with mere behavior change (Tengland, 2013). In this paper, we understand empowerment in an inclusive and overlapping sense in which it involves both capacities and behavior change (Bravo et al., 2015; Fumagalli et al., 2015; Risling et al., 2017). Motivation- and behavior-focused technology interventions can be used to try to achieve it. When the term “empowerment” is used in the literature on health technology, it is often in the sense of providing tools for people to monitor and control their health and health care behaviors (Calvillo et al., 2013; Risling et al., 2017). The mechanism of empowerment through digitally mediated access to information is not always clearly defined. However, as Garcia et al. (2014) highlight, in the context of digitizing healthcare services, empowerment is positioned as a self-reflexive and transformative process.

Empowerment is related to the ethical principle of autonomy. In the familiar bioethical sense, in which respect for autonomy means respect for decision-making capacity, autonomy is not something that can be increased or decreased, but something that must be respected in all circumstances. By contrast, empowerment is not an all-or-nothing matter. Empowerment suggests a graded notion of autonomy that can be increased by improving the capacities and circumstances of decision-making. Morley and Floridi (2020) relate digital health empowerment to “relational autonomy,” according to which autonomy depends on the circumstances of decision-making, including social and other contextual factors. Morley and Floridi conclude that relational autonomy is not realistically achievable given the narrowly individual-targeted focus of empowerment (ibid.). In the present paper, we explore the possibility that empowerment through digital health should be understood more broadly. We do not take a stand on whether autonomy should be understood relationally, and for that reason we leave it open whether empowerment is conceptually linked to autonomy. Our focus is on power, in the form of health literacy, access to resources, and effective motivation, as the target of digital health empowerment. Our discussion therefore relates most closely to the principle of justice in health care ethics.

Despite the positive connotations of empowerment, the notion has been subject to significant criticisms. Morley and Floridi characterize technological empowerment as a matter of making people “complicit in their own self-surveillance” regarding health (2020, 1161). According to them, the technologies on which empowerment depends enforce conformity to conventional social norms about health and health behaviors, inviting blame for deviations from these norms. On this view, empowerment is a false promise. At the same time, the notion of empowerment through behavior change technologies has been criticized for failing to engage intrinsic and lasting motivation. Such technology promises a scaffold for improving health behaviors, but this scaffold, even when it leads to gains in motivation, cannot easily be removed without giving up these gains. This challenge is made more difficult by the commercial interest of companies in creating indefinite “engagement” with a product or service (Sax, 2021). A further concern is that the tools of empowerment depend on technological literacy, access to technology, and social, cognitive, and attentional resources that are not equally available to all (Winters et al., 2020). Hence, those who already had the most power to start with are empowered, while others are left behind.

In what follows, we explore whether empowerment is a notion worth taking seriously when talking about digital health interventions that target behavior. We also raise the question of whether empowerment, even if we can understand it in a positive way, is what we want from digital health. In Section 2, we critically examine the landscape of technological empowerment in the domain of health, drawing on literature about digital health interventions to see what challenges arise for empowerment in this domain. We particularly focus on ways that technology can either cause injustice or promote justice. In Section 3, we explore opportunities to engineer a new notion of empowerment in terms of promoting greater justice in health and health care, but ultimately settle on a more general normative interpretation of the concept based on its underlying ethical value. In the final section, we consider direct and indirect strategies for achieving health empowerment through technology in light of our normative interpretation of the concept.

2 Is Empowerment a Worthwhile Aim?

In this section, we discuss the desirability and feasibility of achieving empowerment through digital health initiatives. We distinguish two kinds of goods that one might try to achieve through empowerment: instrumental and intrinsic. First, we focus on the most commonly cited instrumental goal for deploying digital health: the reduction of scarcity. We discuss the potential pitfalls of seeing digital health empowerment mainly as a means to achieve this goal. Second, we discuss intrinsic ethical motivations for achieving empowerment through digital health, such as the promotion of autonomy and justice.

One of the primary motivations for developing and promoting technologies that enable the prevention and management of disease is the instrumental goal of conserving resources under conditions of scarcity. The benefits of mobile and e-health devices are often described in the context of an aging population that increasingly faces the challenge of dealing with chronic diseases, many of them connected to lifestyle. In this context, it is thought that there could be cost savings associated with telecare (Snoswell et al., 2020). During the recent coronavirus pandemic, promotion of e-health and m-health devices increased due to the shortage of healthcare workers, the need for social distancing, and the mental health crisis arising from isolation and stress (Doraiswamy et al., 2021; Zhou et al., 2020).

Although scarcity-based motivations for digital health are valid, they are distinct from empowerment motivations. If these technologies create a market that is only available to high-resource segments of society, they give rise to social justice concerns. Patients most in need of care are more likely to have low literacy and to lack the necessary technology (e.g., an internet connection) and physical capacities to access specific mobile health interventions. Low-income groups and people unable to use smartphones because of disability (e.g., dementia or blindness) are thereby excluded. These issues raise doubts about healthcare accessibility and about whether health technologies aggravate or mitigate social injustice. Questions regarding who has access, who is excluded, and whether those in need are actually served and receive adequate care require careful analysis and justification.

Digital health initiatives often fail to be empowering in any plausible sense, and when they nonetheless use the language of empowerment as a buzzword, they are selling consumers and public health authorities a bill of goods. The use of these technologies can cause users or populations to lose power in straightforward ways when it comes to controlling, influencing, or managing their health and health care. For example, the technology can be disempowering if it causes people to lose control over how their health data is stored, by whom it is accessed, and for what purposes. If the collected and analyzed data are proprietary, such technologies can even result in loss of access to one’s own health data. Although health technologies could be used to improve care equity (Nelson et al., 2019), they are also likely to entail economic inequalities. For instance, free apps might rely on exploitative data monetization, raising privacy concerns.

Some scholars have argued that social inequality will diminish as a result of the increased productivity and better healthcare access that technologies deliver (Schwab, 2017). Health technologies are sometimes deliberately targeted at low-resource settings, especially rural settings with low access to care (Davis et al., 2020). They are believed to bring healthcare to remote geographical areas or to people with reduced mobility who cannot easily visit a hospital or clinic. However, this is unlikely to occur unless addressing equity is an explicit, intrinsic goal of digital health. It also depends on background institutional arrangements and socioeconomic factors. For example, Davis et al. (2020) describe a comprehensive telehealth intervention that resulted in better access to care, control over health-related behaviors, and improved biomarker scores in a lower resource setting, but they note that it would not be reimbursed by insurance or government-provided health care outside of a research context. Such interventions will thus remain effectively unavailable in lower resource settings. Empowerment therefore requires addressing the broader social factors that restrict a patient’s access to online and offline healthcare services. In addition, attention must be given to perverse incentives within the system that create barriers to adoption. For example, although the system might benefit in principle from patients self-treating at home using digital health technologies, health practitioners might be incentivized to avoid letting this happen, because only seeing patients offline generates a payment (Burr & Morley, 2020).

A related problem is a mismatch between the expectations and background assumptions of developers and technology advocates and those of users. Greenhalgh et al. (2017) point out that the language of empowerment is deployed to suggest that “remote technology will make care more efficient by encouraging self-management of chronic conditions.” But the labor that this implies can sometimes “be physically or cognitively impossible” or carry ethical problems. For example, one of the authors’ case studies concerns a monitoring system meant to help people with cognitive impairment to have greater mobility, in which some “clients … did not like being tracked” and “some who initially accepted the device subsequently removed or sought to disable it.” In this case study, a technology “was designed around an assumed ‘hierarchy’ of friends and relatives for the call center staff to telephone … though in reality some potentially eligible clients had weak or absent social networks.” They also describe a second case study in which “some individuals who were assessed as ‘needing’ [a pendant] alarm refused to accept it because they did not believe they needed it, did not like the aesthetics or did not see why they should pay for it; others accepted the device but (for numerous reasons) did not wear it.” If users reject a technology because it makes assumptions about their willingness and capacities for self-management that they do not share or that are flatly untrue, this is hardly empowering.

In the face of these problems, one might see digital health empowerment as justified only if it serves intrinsic ethical goals such as promoting moral values. A range of core values and moral obligations have played a widely accepted role in the ethics of medicine, including respect for autonomy, beneficence, justice, confidentiality, and nonmaleficence (Beauchamp & Childress, 2019). Arguably, the value of empowerment derives in part from how it helps to realize the moral principle of respect for patient autonomy: the recognition that because we are persons, with our own values, reasons, and conceptions of a good life, our decision-making capacities are worthy of respect and support. Technological empowerment as it is typically mobilized suggests a highly active, self-initiated notion of autonomy, in which a person herself undertakes health behaviors using knowledge and insight made directly available to her by connected devices. This is not the only way to realize autonomy in the health domain: more traditional ways are for a patient to agree to courses of action suggested by a clinical professional, or to engage in shared decision-making with a clinical professional. It could be that these latter realizations of autonomy, although difficult to incorporate in digital self-management, are actually more realistic for many people (given sufficient access to care), and therefore a better way of reconciling autonomy with beneficence and nonmaleficence (medicine’s obligations to act for the good of the patient and to do no unnecessary harm to them).

Connected to the highly active, self-initiated notion of autonomy is a notion of responsibility for one’s own health premised on control. Many technologies promise their users control over health-related behaviors, or over health itself, but this sends a message of increased responsibility for health outcomes (Kayser et al., 2019). As Davies puts it, “mHealth could be integrated into healthcare in another way, using technological monitoring to increase the role of individual responsibility not only as a method of empowering patients, but also to hold them accountable as users of public resources” (Davies, 2021). Health technologies grant patients the responsibility to carry out tasks that healthcare professionals traditionally perform (e.g., monitoring vital signs and updating symptoms). Moreover, they continuously demand that patients perform tasks within a specific time frame. This falls within a general neoliberal trend of decentralizing and shifting responsibilities from healthcare providers to individual patients while at the same time decreasing the accountability of national health services (Hampshire et al., 2017). Health technologies are presented as a techno-utopian solution, with self-tracking portrayed as a panacea for preventive medicine (Smith & Vonthethoff, 2017).

If users possess the insights and tools to control their health, they could perhaps reasonably be expected to do so. However, while the expectation to take control over certain health care behaviors might to some extent be realistic and fair, full control over health is unattainable. Such a burden of responsibility might be detrimental to individuals’ well-being, as they may experience isolation and excessive emotional stress from being left alone in their own care by the system (Floridi, 2016). Morley and Floridi (2020) further point out that this could even lead to victim-blaming that “denies the fact that much of health is controlled by macro forces over which the ‘user’ has only very marginal or no control.” Patients who fail to achieve unrealistic standards of wellness are then considered irresponsible users who have failed to be empowered (Scott, 2018). Technologies may signal to users that health is controllable through quantified knowledge of their bodies and the exercise of self-control, but much is in fact beyond the user’s control (Lupton & Jutel, 2015). It is therefore important to specify the kind of control envisioned for technological empowerment. “Control over health-related quality of life” (e.g., Tengland, 2013) means something different from “control over health care behaviors” or even “control over health” itself (e.g., Calvillo et al., 2013; Risling et al., 2017). Additionally, as noted above, not all users have the same resources available to them, a fact that limits their opportunities to empower themselves. Determinants of health form an uneven playing field, with socioeconomic, cultural, and environmental conditions bringing about a so-called social gradient of health (Dahlgren & Whitehead, 1991). Overlapping mechanisms by which these determinants contribute to inequities in health outcomes are differential power and resources, differential exposure, differential vulnerability, and differential consequences of being sick (Diderichsen et al., 2019, cited in Dahlgren & Whitehead, 2021). Users have different “starting points” and are not equally non-empowered before they use these technologies (Kapeller & Loosman, 2023). As such, not all users can be expected to bear the same level of responsibility for their health.

There is no stable, consistent definition of empowerment in the scientific literature or in practice, and the term is mobilized differently across different contexts (ibid.). In these mobilizations, different empowerment goals are communicated to users. Empowerment is often implied to be an inherently valuable goal for users of digital health, while its precise benefits remain unclear (Segers & Mertes, 2022). This lack of clarity can reinforce a situation in which empowerment through digital health holds promise for those with the most pre-existing power, while those most in need of it are left behind.

This leads us to propose another potential intrinsic goal of empowerment: the bioethical principle of justice. Hansson describes ways that technology can promote social justice, including a category of cases where digital health makes access to care easier (2017, 56). Winters et al. argue for a prioritarian ethics of digital health that helps “manage or mitigate the structural neglect and digital inequalities that could result from the use of digital health in low resource settings” (2020, 259). If we understand empowerment as increasing the capability to access and use resources for health and health care, it makes sense that those with the least power stand to gain the most through empowerment, for two reasons: (1) there is much more room for increased capability compared to those with more power, and (2) incremental differences in the capability to access and use resources are more meaningful for the least well off than for those better off, because of diminishing marginal value. However, the prioritarian principle goes beyond these empirical generalizations by holding that the same degree of improvement, even once (1) and (2) are taken into account, has more value for the worse-off person than for the better-off person (Arneson, 2013). Hence technological empowerment could conceivably be strongly supported by the moral principle of justice, so long as we understand it in a particular way, namely, as weighing improvements to the health capabilities of the worst off more heavily. In the next section, we further examine whether we should redefine empowerment in terms of this idea.
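
To make the prioritarian idea concrete, the following is a minimal formal sketch in our own notation; it is not drawn from Arneson (2013) or Winters et al. (2020) and is offered only as an illustration.

```latex
% Minimal sketch in our own notation (illustrative only, not from the cited authors).
% Let w_i denote person i's health-related capability level. A prioritarian
% evaluation applies a strictly increasing, strictly concave transformation f
% to each person's level before aggregating:
\[
  V \;=\; \sum_{i} f(w_i), \qquad f' > 0, \quad f'' < 0 .
\]
% Concavity entails that a fixed capability gain \Delta contributes more moral
% value the lower the level at which it accrues:
\[
  f(w + \Delta) - f(w) \;>\; f(w' + \Delta) - f(w') \qquad \text{whenever } w < w' .
\]
```

On this sketch, the extra weight given to the worse off holds even when the size of the gain is held fixed, which is precisely how the prioritarian principle goes beyond the empirical generalizations (1) and (2) above.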

3 Engineering Empowerment?

The considerations examined in the previous section point to a tension between the conventional meaning of empowerment as mobilized in digital health interventions and its ethical value. In this section, we examine the prospects for engineering the concept of health empowerment to focus on justice. Conceptual engineering (sometimes called conceptual amelioration) is a recent line of thought in philosophy aiming at the intentional “assessment and improvement” of concepts, representations, or ideas (Cappelen et al., 2019). Conceptual engineering is an important option because it is often expressly framed in terms of the normative goal of recentering concepts around justice and equity (Haslanger, 2000). In our discussion we avoid general controversies about how feasible it is to engage in conceptual engineering (Andow, 2021), as well as worries about the shortcomings of an “engineering” metaphor for talking about conceptual change (Isaac et al., 2022). We focus only on the desirability of an alternative concept of empowerment when talking about technological tools for managing one’s own health. The proposal we consider would formally entail that it is false to ascribe improved health empowerment to digital technologies when they only increase the health-related capabilities of those who already have relatively high capabilities. Such technologies would fall strictly outside the boundaries of the engineered concept and would therefore not be empowering at all.

Would it be desirable to redefine health empowerment with a focus on addressing injustice in capabilities related to health (care)? On this approach, health empowerment becomes an explicitly ethical concept that connects to promoting justice. There are, however, some ambiguities and limitations of a concept of empowerment built solely on justice. Winters et al.’s (2020) argument for the prioritarian view is that the global health movement’s focus on the maximization of well-being, and on issues of scalability and reach in the development of health technologies in low- and middle-income contexts, actually circumvents efforts to help the worst off. The authors alternate between the idea of reducing inequality and the idea of improving the well-being of the worst off, without specifically addressing how to evaluate cases where a digital health technology both increases inequality and improves the well-being of the worst off. Such a case is easy to imagine when highly scalable, low-cost technologies are deployed. On the one hand, improving the health-related capability of the worst off is in line with Rawls’ difference principle (1971), which states that distributive inequalities are permitted so long as they benefit the least well off. On the other hand, inequalities of power are not just a benign distributive effect, but a potentially malignant structural condition that can defeat efforts to benefit the worst off and redress injustices. The most serious worry here is technology creating “permanent disadvantages for the underprivileged” (Hansson, 2017, 54). For this reason, Winters et al. are probably right to claim that, for the sake of justice, it may be necessary to focus digital health research and development efforts on the worst off (cf. Hansson, 2017, 57).

Even so, there remains ambiguity about whether broadly targeted initiatives can also be empowering on such a view, and this ambiguity would likely persist even under a re-engineered concept of empowerment. After all, even if we achieve the ambition to engineer the concept of empowerment so that it semantically entails the advancement of justice, we cannot expect it to resolve every fine-grained practical question about what is or is not socially just.

A further limitation of a justice-focused concept of empowerment is that it ignores some of the intuitions guiding current usage of the term. Individuals who gain insight into their health and become motivated to change health-related behaviors do appear to be empowered by these very facts, whether or not they have relatively high capabilities. Hepp et al. (2021) analyze the discourse around empowerment and the “Quantified Self” (QS) movement in the German and British press. The QS movement is a community of like-minded, high-resource individuals who tinker with ways to track and “hack” their health, well-being, and productivity. The researchers found that the community defines itself in terms of empowerment, and that the discourse of a broader public, while skeptical of the desirability of disruptive digital technologies, also affirms the transformative effect of digital technologies. If we flatly deny that QS initiatives fall under the concept of empowerment because they occur in high-resource populations, we erase these assessments by denying participants’ own experiences of empowerment and their affirmation by a broader public. A different, more open notion of empowerment may be more useful for research of this kind, and more respectful of the experiences of different societal participants.

A related limitation is that defining empowerment solely in terms of justice ignores how it may serve other bioethical values such as autonomy. If the goal is to find the ethical values that underlie our practices and terminology, autonomy-enhancing effects of digital health initiatives, even in those with higher starting capabilities, should also count. They are empowering insofar as they increase knowledge, access to health resources, and motivation to address health issues.

These reflections suggest that it is useful to think of justice and empowerment in digital health in two senses, one individual and one collective. In the individual sense, a person can be empowered by being made individually more capable of understanding and accessing health care resources. This can be an individually just outcome so long as the person is in need of these extra capabilities. However, even a person empowered in this individual sense might belong to a community in which many have low digital literacy and health capabilities. For this reason, in one important respect (the collective sense), the individual remains disempowered, and adding to their health capabilities without addressing those of others is unjust.

Because of the apparent complexity and lack of univocality in the values represented by empowerment, we remain skeptical about whether the best way forward is to redefine empowerment so that it directly entails social justice. An alternative to conceptual engineering would be a “normative interpretation” on which ethical values are used to interpret concepts such as empowerment (cf. Roessler, 2005, on the concept of privacy). This latter approach does not rigidly demarcate the concept: empowerment is understood as embodying a somewhat open range of ethical values. This means that, on a given occasion, the process of empowerment can be truthfully ascribed to a digital health technology when that technology is linked with the promotion of autonomy and/or justice under some (but not necessarily all) reasonable interpretations of these complex concepts. Such a view is not idealistic: it does not strictly entail the falsity of ascriptions of empowerment to initiatives simply because they do not sufficiently promote justice in a very specific form, such as that found in (one version of) prioritarianism. It can embrace non-ideal approaches to justice in health that make limited, realistic assumptions about the background conditions required for a health promotion practice to be sustainable and fair (Saghai, 2018). For this reason, we are doubtful about proposals such as Kreitmair’s (2023) that make empowerment such a demanding concept that it is best not talked about in relation to digital health initiatives.

4 Strategies of Empowerment Through Digital Health

In this section, we discuss three contrasting strategies to try to achieve empowerment through digital health: direct strategies involving technology push or user pull, and indirect strategies involving the redirection of resources. We regard each of these strategies as a difficult social and technical challenge, particularly if it is expected to replace traditional care, reduce health care expenditure, or reduce burden on paid or informal caregivers.

We start with the direct strategies: those that aim to increase people’s capabilities to take advantage of health resources by giving them access to, and skills for using, technology. Technology push involves tailored interventions deploying built-for-purpose technologies such as wearables and mobile software applications to help people monitor and influence their health (care). For example, a multi-specialist team might design a smartphone application for patients at risk of gestational diabetes, loan them a connected sphygmomanometer, and prescribe specific uses of these items as part of a clinical intervention. By contrast, user pull strategies aim to increase the adoption and use of existing technologies such as patient portals, online communities, and fitness trackers to learn more about health (care) and “translate recommendations and instructions into daily self-care strategies” (Johansson et al., 2021), but without designing or promoting a particular target technology. For example, people at risk of heart disease might be encouraged to find digital tools on their phones that allow them to set goals, find resources, and receive motivational messages. In sum, push factors emphasize the power of health experts to set targets for others and authorize access to devices and services that are free or reimbursed through insurance, whereas pull factors emphasize the motivation and ability of consumers to access existing digital health devices and services through their own means, which may or may not include gaining access to premium services through the use of more advanced devices and software platforms.

Technology push may sound like an inherently top-down exercise of power by those with resources and medical knowledge. Lupton writes that “technologies themselves play a structuring role in delimiting action on the part of patients. Patients have not been consulted about the policies, design or use of the technologies they are given and are still in practice positioned as passive targets of these technologies. … Patients are still expected to conform to healthcare providers’ expectations and it is the providers who are positioned as possessing the legitimate knowledge of their condition and how best to treat it” (Lupton, 2013, 267, referencing Nicolini, 2007 and Veitch, 2010). However, it does not have to be this way. Technologies can be designed through a collaborative process of power-sharing with the intended target groups when establishing requirements and boundary constraints, and this collaboration can continue through the piloting and evaluation of a digital health innovation. Because technology push is focused on design, technology solutions can be expressly targeted to the needs and environments of less empowered populations, according to how these populations themselves frame their needs. They can be tailored according to patient-derived knowledge of barriers and limitations to access within specific populations and contexts.

User pull strategies, instead of focusing on technology design, emphasize the motivation and felt need of people to access health care and improve health using whatever resources are at hand. In line with the origins of the concept of “demand pull” in the field of marketing, user pull strategies can also attempt to increase people’s interest in and desire for digital health resources. In some low-resource settings, strong motivation already exists because of an entrenched lack of health care resources in the face of urgent needs. In such settings, people use digital health in the hope that it will help them gain access to care (Kaur et al., 2020). In other settings, by contrast, there is a motivation gap. It is possible to prepare the ground for “pull” factors by making existing general-use technological tools more realistically usable for, and available to, those who currently have the least access. For example, we can focus on the skills and support required to access and use digital health resources effectively, or provide access to general technological tools. Considering the digital divide, and in particular the digital health literacy divide, is key here.

Whether by way of push or pull, direct strategies for achieving empowerment through digital health run into the inherent barrier that engagement is limited by events and contingencies outside the control of clinicians and service providers, as well as by emotional unwillingness to engage with a digital service (Keeling et al., 2019). Compounding this problem, historically oppressed groups have often been targeted for unethical medical practices such as forced sterilization, as well as for exploitative and involuntary research participation. They may continue to face discrimination by clinicians as well as structural injustice in access to care and representation in decision-making. In such cases, there are often additional barriers of distrust that raise clouds of suspicion around sanctioned medical resources (Nickel & Frank, 2020). These barriers are formidable: they involve distrust of technology and require more than a technological fix if they are ever to be overcome.

Let us now consider an alternative, indirect strategy: using digital health interventions to reduce the burden imposed by high-resource individuals on the health care system, thereby freeing resources to reach disempowered people in ways that may or may not be technological. While digital health interventions may pose the risk of increasing the digital resources devoted to “worried well” patients (Almallah & Doyle, 2020), clinical queries sent via email or secure messaging systems may help to differentiate the “worried well” from individuals in real need of physical assessment (Pagliari, 2021). For example, imagine an overburdened urban general practice clinic that deploys digital health initiatives to support high-resource patients who can manage their own chronic conditions, thereby freeing up an entire morning for difficult-to-reach patients who were previously underserved.

In order to see how this could be considered empowering, it is important to reach toward a broader understanding of the concept. As we pointed out at the start of the paper, there are several understandings of empowerment, not all of them technological. At its base, health empowerment as we have defined it is about the human capability to access health care and promote health. The expectation is that, by reducing health care expenditure by high-capability patients or reducing their burden on paid carers, resources would be freed up for low- and mid-resource patients. Merely making time and other resources available for underserved patients will not expand their capabilities, however; these resources must be translated into increased motivation, skills, and knowledge. This indirect approach would represent a non-standard sense of empowerment through digital technology, because digital health would be used to empower people who do not use it, rather than the target users of the technologies themselves.

As we mentioned earlier, freeing up resources with the help of digital tools is a severe challenge. It is made even harder by the fact that high-resource patients, when given additional digital resources, may in fact be empowered to use more non-digital health care resources rather than fewer. For example, given tools to self-manage diabetes, they might be prompted to request interventions when they receive ambiguous information from the system. Because they are high-resource and may have higher health literacy, they may have the capability to translate experiences with a digital system into requests for care and time with clinicians, because they are taken more seriously and know the language and tone they need to use in order to get resources and attention. Systemic factors such as testimonial injustice therefore work against this indirect strategy (Fricker, 2007).

To sum up, then: the underlying assumption that digital health tools developed in high-resource settings will work as-is in low-resource settings is highly implausible. Moreover, many generic tools are irrelevant in under-resourced settings. A solution might be to scale up and sustain cost-effective digital health tools to support the efforts of low- and middle-income individuals to maintain health and well-being. However, the large-scale adoption of proven digital health tools usually receives inadequate funding over time. A further issue is data overload for patients, along with the ongoing need to interpret what the data mean for them clinically and practically. Although machine learning and AI decision-support tools have the potential to help optimize treatments, the continued need for sound clinical judgment cannot be overstated (Lin et al., 2019). Finally, the indirect approach is intriguing, but the promise of digital health for freeing up resources in overburdened systems has not been borne out consistently in empirical evidence (Iribarren et al., 2017).

5 Conclusion

In this paper we have argued that empowerment in digital health can aim at ethical goods, such as using technology to empower the least well off by improving access to health care, motivation to improve health behaviors, and digital health knowledge and skills. These health-related capabilities are crucial for effective digital health, but many people do not possess them. Although the current way that empowerment is discussed in digital health initiatives is problematic, the underlying concept invokes important bioethical values. We claim that social justice is important to empowerment through digital health, and that there are several strategies for trying to realize this value through technology. However, the value of autonomy, as well as other conceptions of justice, can also legitimately motivate digital health initiatives, and the concept of empowerment can encompass these.

In the future, empowerment through digital health will look different. New technologies such as artificial intelligence promise to transform digital health in the medium and long term, pushing physicians into the roles of care manager and researcher (Nickel & Frank, 2020). At the same time, demographic changes will mean that older people with dementia and other chronic illnesses make up a growing share of the overall population. It is unlikely that everybody will possess the resources to have a “digital doctor” in their living room, or the skills and motivation to make use of one. Whether the technologies of the future benefit the least well off is an open question. If we wish to pursue the ethical values underlying empowerment, it will be necessary to investigate thoroughly the best practices for using artificial intelligence and other technologies to benefit underserved and difficult-to-reach populations of patients and other users, for both care and prevention. The ethical values and strategies of empowerment we have outlined here are likely to remain applicable in such a scenario.