Abstract
In this paper, I identify two problems of trust in an AI-relevant context: a theoretical problem and a practical one. I address a number of skeptical challenges to an AI-relevant theory of trust and introduce what I term the ‘scope challenge’, which I take to hold for any AI-relevant theory of trust that purports to be representationally adequate to the multifarious forms of trust and AI. I then suggest how trust-engineering, a position intermediate between the modified pure rational-choice account and an account that gives rise to trustworthy AI, might address the practical problem of trust, before identifying and critically evaluating two candidate trust-engineering approaches.