Abstract
AI for social good is a thriving research topic and a frequently declared goal of AI strategies and regulation. This article investigates the requirements necessary for AI to actually serve a public interest, and hence be socially good. The authors propose shifting the focus of the discourse towards democratic governance processes when developing and deploying AI systems. The article draws from the rich history of public interest theory in political philosophy and law, and develops a framework for ‘public interest AI’. The framework consists of (1) public justification for the AI system, (2) an emphasis on equality, (3) a deliberation/co-design process, (4) technical safeguards, and (5) openness to validation. This framework is then applied to two case studies: SyRI, the Dutch welfare fraud detection project, and UNICEF’s Project Connect, which maps schools worldwide. Through the analysis of these cases, the authors conclude that public interest is a helpful and practical guide for the development and governance of AI for the people.