Abstract
AI tutors promise to expand access to personalized learning, improving student achievement and addressing disparities in the resources available to students across socioeconomic contexts. The rapid development and introduction of AI tutors raise fundamental questions of epistemic trust in education. What criteria should guide students' critical assessments of the epistemic trustworthiness of these new technologies? And how should these technologies, and the environments in which they are situated, be designed to improve their epistemic trustworthiness? In this article, Nicolas Tanchuk and Rebecca Taylor argue for a shared responsibility model of epistemic trust that includes a duty to collaboratively improve the epistemic environment. Building on prior frameworks, the model they advance identifies five higher-order criteria for assessing the epistemic credibility of individuals, tools, and institutions and for guiding the co-creation of the epistemic environment: (1) epistemic motivation, (2) epistemic inclusivity, (3) epistemic accountability, (4) epistemic accuracy, and (5) reciprocal epistemic transparency.