Abstract
Artificial agents create significant moral opportunities and challenges. Over the last two decades, discourse has largely focused on the concept of a ‘responsibility gap.’ We argue that this concept is incoherent and misguided, and that it diverts attention from the core issue of ‘control gaps.’ Control gaps arise when there is a discrepancy between the causal control an agent exercises and the moral control it should possess or emulate. Such gaps present moral risks and often lead to harm or ethical violations. We propose a second-order ‘duty of moral control’ that mandates closing these gaps to reduce risks within acceptable moral limits. Our analysis encompasses both autonomous machines and collective agents, acknowledging their similarities as well as key differences in constitution and moral status. We suggest four methods to close control gaps: ensuring that artificial agents attain moral agency, providing meaningful human control, implementing safety engineering, and employing social control. These methods aim to integrate artificial agents into society responsibly. We conclude that a realistic approach, one that addresses the practical problems posed by control gaps, is essential: it offers ways to manage the risks artificial agents pose while maintaining acceptable moral standards, allowing us to harness their potential responsibly and meet the ethical challenges they present.