Abstract
There are three main traditional accounts of vagueness: one takes it to be a genuinely metaphysical phenomenon, one a phenomenon of ignorance, and one a linguistic or conceptual phenomenon. In this paper I first briefly present these views, focusing on the epistemicist and supervaluationist strategies, and point to some well-known problems they face. I then examine a 'statistical epistemicist' account of vagueness designed to avoid precisely these problems: a view that traces the phenomenon of vagueness to our linguistic practices while insisting that meaning supervenes on use, and that our use of vague terms does yield sharp and precise meanings of which we are ignorant, thereby allowing bivalence to hold.