Abstract
Artificial Intelligence (AI) is widely used to support human decision-making, with current applications in financial services, engineering, and management. A number of attempts have been made to introduce AI decision support systems into areas that more obviously involve moral judgement, including systems that advise on patient care and social benefit entitlement, and even systems that offer ethical advice to medical professionals. These developments raise a complex set of moral questions. This paper proposes replacing them with a clearer question: under what circumstances, if any, would people accept a moral judgement made by some sort of machine? It is argued that people would accept such judgements under some circumstances, and that this answer raises urgent practical moral problems.