Abstract
Theories of action tend to require agents to have mental representations. A common trope in discussions of artificial intelligence (AI) is that AI systems lack such representations, and so cannot be agents. Properly understood, there may be something to the requirement, but the trope is badly misguided. Here we provide an account of representation for AI that is sufficient to underwrite attributions to these systems of ownership, action, and responsibility. Existing accounts of mental representation tend to be too demanding and insufficiently parsimonious. We offer instead a minimalist account of representation that ascribes only those features necessary for explaining action, trimming the “extra” features in existing accounts (e.g., representation as a “mental” phenomenon). On our account, ‘representation’ is whatever it is that, for example, a thermostat is doing with its thermometer: the thermostat is disposed to act whenever the thermometer’s reading falls outside a given range. Our account allows us to offer a new perspective on the ‘responsibility gap’, a problem raised by the actions of sophisticated machines: because nobody has enough control over the machine’s actions to be able to assume responsibility, conventional approaches to responsibility ascription are inappropriate. We argue that there is a distinction between finding responsible and holding responsible and that, in order to resolve the responsibility gap, we must first clarify the conceptual terrain concerning which agent is in fact responsible.
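
To fix ideas, the thermostat example admits a short sketch (illustrative only; the class name, threshold values, and action labels are our assumptions, not part of the account itself): the device ‘represents’ temperature only in the sense that its disposition to act is keyed to whether the thermometer’s reading falls outside a set-point range.

```python
# A minimal sketch of representation as a disposition to act: the
# thermostat carries no "mental" content, only a set-point range and
# a behaviour keyed to the thermometer's reading. Names and values
# here are illustrative assumptions, not the paper's own formalism.

class Thermostat:
    def __init__(self, low: float, high: float):
        # The set-point range is all the "content" the device carries.
        self.low = low
        self.high = high

    def act(self, reading: float) -> str:
        # Disposed to act iff the reading falls outside the range;
        # otherwise the device does nothing.
        if reading < self.low:
            return "heat on"
        if reading > self.high:
            return "cool on"
        return "idle"

thermostat = Thermostat(low=18.0, high=24.0)
print(thermostat.act(15.5))  # -> "heat on"
print(thermostat.act(21.0))  # -> "idle"
```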