Abstract
Detroit: Become Human (DBH) offers a visually striking gameplay experience that both tells a philosophical story and stimulates the moral reasoning process in players. The game features a futuristic world where highly intelligent androids are bought and sold as workers who take on menial labor tasks for humans. In this chapter, we explore three dimensions of moral reasoning: accounts of moral agency, ethical theories or frameworks, and accounts of moral patiency. We then examine how DBH addresses each of these philosophical issues in its narrative and gameplay scenarios. Issues of moral agency are explored through some of the androids gaining consciousness and autonomy. Ethical theory is explored through the various ethical dilemmas that emerge in gameplay. Moral patiency is explored by asking whether the androids, conscious and unconscious, are worthy of moral concern and why. We then show how the gameplay structure offers a unique interactive opportunity for players to engage in the moral reasoning process. Through the questions it raises and the scenarios it poses, DBH makes an implicit anti-speciesist argument regarding moral patiency. It also makes a secondary argument that suffering and struggle are necessary to develop the possibility of second-order desires and true freedom and agency. The interactive structure of the game, in which players' choices carry more weight than in many gameplay experiences, makes DBH a unique work of contemporary philosophical pop culture.