Artificially intelligent systems (AISs) are being created by software development companies (SDCs) to influence clinical decision‐making. Historically, clinicians have led healthcare decision‐making, and the introduction of AISs makes SDCs novel actors in the clinical decision‐making space. Although these AISs are intended to influence a clinician's decision‐making, SDCs have been clear that clinicians are in fact the final decision‐makers in clinical care, and that AISs can only inform their decisions. As such, the default position is that clinicians should hold responsibility for the outcomes of the use of AISs. Yet this default becomes questionable once an AIS has influenced a clinician's judgement and their subsequent decision. In this paper, we argue that the default position is imbalanced and unjust, and that careful thought needs to go into how personal moral responsibility for the use of AISs in clinical decision‐making should be attributed. We employ and examine the distinction between prospective and retrospective responsibility, and consider foreseeability to be key in determining how personal moral responsibility can be justly attributed. This leads us to the view that moral responsibility for the outcomes of using AISs in healthcare ought to be shared by clinical users and SDCs.