Abstract
Agent-relative consequentialism is thought attractive because it can secure agent-centred constraints while retaining consequentialism's compelling idea: that it is always permissible to bring about the best available outcome. We argue, however, that the commitments of agent-relative consequentialism lead it to run afoul of a plausibility requirement on moral theories. A moral theory must not be such that there is a possible circumstance in which, were every agent to act impermissibly, each would have more reason to prefer the world thereby actualized to the world that would have been actualized had every agent instead acted permissibly.