In a recent reply to our article, “What is Interpretability?,” Prasetya argues against our position that artificial neural networks are explainable. Prasetya claims that our indefeasibility thesis—that adding complexity to an explanation of a phenomenon does not make the phenomenon any less explainable—is false. More precisely, the objection is that unificationist explanations are defeasible to increasing complexity, and thus we may be unable to provide such explanations of highly complex AI models. The reply highlights an important lacuna in our original paper, namely the omission of the unificationist account of explanation, and affords us the opportunity to respond. Here, we argue that artificial neural networks are explainable in a way that should satisfy unificationists, and that interpretability methods present ways in which ML theories can achieve unification.