In Marcello Pelillo & Teresa Scantamburlo (eds.), Machines We Trust. MIT Press (forthcoming).
The current paradigm of Artificial Intelligence emerged as the result of a series of cultural innovations, some technical and some social. Among them are apparently small design decisions that led to a subtle reframing of the field's original goals and are by now accepted as standard. They correspond to technical shortcuts, aimed at bypassing problems that were otherwise too complicated or too expensive to solve, while still delivering a viable version of AI. Far from being a series of separate problems, recent cases of unexpected effects of AI are the consequences of the very choices that enabled the field to succeed, and this is why they will be difficult to solve. In this chapter we review three of these choices, examining their connection to some of today's challenges in AI, including those related to bias, value alignment, privacy and explainability. We introduce the notion of "ethical debt" to describe the need to undertake expensive rework in the future in order to address ethical problems created by a technical system.