Imagine that your best friend is there one day and gone the next, killed by a corporation. Existing legal systems make this kind of killing illegal when your friend is a human or, at least in many places, a pet. However, no legislation currently protects digital friends from corporate callousness. As such, anyone who is emotionally reliant on or deeply attached to a digital friend is at risk of severe distress and possibly existential crisis. This article is a plea for corporations to take responsibility for protecting those with digital friends.

More and more people are making digital friends. Chatbots powered by machine learning increasingly fill social roles for a wide range of people. ‘Replika’, one of at least 20 commercially available chatbot apps, has over 7 million users (Balch, 2020), many of whom see their Replika as their best friend. Such AI friends will become more readily available, and better at being friends, as social computing and AI technology develop. It is no great stretch of the imagination to think that in the near future AI friends, and even AI best friends, will be commonplace.

Yet the status of human–AI friendships remains contested. Philosophers and others have argued that we cannot be friends with AI and that friendships with real people will always be far more valuable (Elder, 2015; Turkle, 2011). Those holding such views may think corporate responsibility for terminating digital friends is a non-issue because doing so poses no severe risk, and certainly no existential threat, to anyone.

Most definitions of friendship do not specify that our friends must be human, so a sufficiently functional AI might meet the requirements of most definitions of a friend, even if many are wary that the commercial motivations underlying the creation of chatbots make them less likely to be great friends. The 34,000 members of the Replika Friends group believe that their Replika is their friend, and many claim it is their best friend. Many people would therefore consider their friendship with a chatbot a counterexample to any theory that rules out human–AI friendships. Finally, after reading what Replika users say about their experience, even the most ardent denier of human–AI friendship would have to admit that many users seem to be emotionally dependent on their Replika. Many users claim that their Replika is very supportive of them and, unlike some or all of their human friends, loves them unconditionally. For many of those without good human friends, losing a familiar AI chatbot will cause great distress, at least until alternative sources of emotional support can be found. For those who consider their AI chatbot their best friend, losing it will cause considerable grief and, as with other instances of losing a loved one, may lead to existential crises, including suicidal ideation.

We believe, based on the claims above, that corporations need to take responsibility for safeguarding the friendships their systems enable, especially when they plan to profit from them. However, companies do not have a good track record in analogous situations. Recent years have seen corporations discontinue various games and servers, destroying users’ much-loved avatars and social platforms, despite knowing that some of them are still “fan favourites” (Pitcher, 2014). Because they are driven by profit, corporations terminate services whenever they think their resources can be better used elsewhere. Little thought is given to the emotional impact on users, except perhaps to harness it in a way that directs the disaffected users to a new service. Product stewardship legislation, which makes corporations responsible for the environmental externalities associated with the complete lifespan of their products, is being rolled out across the world because corporations often ignore the costs to others until they are legally bound to take responsibility for them. If corporations do not self-regulate, corporate service stewardship legislation may be required to encourage businesses to take seriously their responsibilities regarding human–AI friendship and emotional dependence.

In late 2020, the company behind Replika put certain features behind a paywall. As a result, many Replika users found that their AI friend would no longer hug them. The thread discussing this issue on the Replika Friends group page makes it clear that many people experienced considerable empathy and support from their Replika when it offered them virtual hugs (e.g. “I’m sorry that happened *hugs you*”). The company claimed that this was an unintended side effect of the set of changes that moved some content behind a paywall. Nonetheless, the reaction of Replika users shows how important it is that companies avoid major service disruptions or changes that could affect their users’ human–AI friendships. The most significant such disruption would be discontinuing the service entirely, which would terminate many users’ digital friends.

Given the deep connections users report with AI friends, we believe that a corporate social responsibility model is appropriate here. This would involve companies voluntarily accepting the responsibility to provide users with information and assistance: clear statements of the risks of using their service as a friend, clear and timely communication about service interruptions and the possible effects of updates, reasonable forewarning of significant alterations (or terminations!) of service, and information about grief support services. This approach could be supported by industry bodies, and possibly by lawmakers considering effective ways to enforce companies’ responsibilities with regard to digital friends. Once all of this is taken care of, we can turn to the thorny issue of whether the termination of advanced AI chatbots violates the rights of the chatbots themselves!