Abstract
Who is responsible for a harm caused by AI, or by a machine or system that relies on artificial intelligence? Given that current AI is neither conscious nor sentient, it’s unclear that AI itself can be responsible for the harm. But given that AI acts independently of its developer or user, it’s also unclear that the developer or user is responsible for the harm. This gives rise to the so-called responsibility gap: cases where AI causes a harm, but no one is responsible for it. Two central questions in the literature are whether responsibility gaps exist and, if so, whether they’re morally problematic in a way that counts against developing or using AI. While some authors argue that responsibility gaps exist and are morally problematic, others argue that they don’t exist. In this paper, I defend a novel position. First, I argue that current AI doesn’t generate a new kind of concern about responsibility that older technologies don’t. Then, I argue that responsibility gaps exist but are unproblematic.