Abstract
Conventional accounts of epistemic opacity, particularly those stemming from the definitive work of Paul Humphreys, typically appeal to limitations on the part of epistemic agents to explain the distinct ways in which systems, such as computational methods and devices, are opaque. They point, for example, to an agent's lack of technical skill, a failure to meet standards of best practice, or even the nature of the agent itself as reasons why epistemically relevant elements of a process may be inaccessible. In this paper I argue that there are certain instances of epistemic opacity, particularly in computational methods such as computer simulations and machine learning processes, that do not arise from, are not responsive to, and are therefore not explained by the epistemic limitations of an agent. I call these agent-neutral and agent-independent instances of epistemic opacity, respectively. It follows, I further argue, that conventional accounts of epistemic opacity offer only a limited understanding of the full spectrum of kinds and sources of epistemic opacity, particularly of the kind found in computational methods. As I show below, the limitations of these accounts are reflected in their failure to provide satisfactory explanations when faced with certain instances of opacity.