John Burgess has recently argued that Timothy Williamson’s attempts to avoid the objection that his theory of vagueness is based on an untenable metaphysics of content are unsuccessful. Burgess’s arguments are important, and largely correct, but there is a mistake in the discussion of one of the key examples. In this note I provide some alternative examples and use them to repair the mistaken section of the argument.
What kind of semantics should someone who accepts the epistemicist theory of vagueness defended in Timothy Williamson’s Vagueness (1994) give a definiteness operator? To impose some interesting constraints on acceptable answers to this question, I will assume that the object language also contains a metaphysical necessity operator and a metaphysical actuality operator. I will suggest that the answer is to be found by working within a three-dimensional model theory. I will provide sketches of two ways of extracting an epistemicist semantics from that model theory, one of which I will find to be more plausible than the other.
Epistemicism about vagueness is the view that vagueness, or indeterminacy, is an epistemic matter. Truthmaker-gap epistemicism is the view that indeterminate truths are indeterminate because their truth is not grounded by any worldly fact. Both epistemicism in general and truthmaker-gap epistemicism originated in Roy Sorensen's work on vagueness. My aim in this paper is to give a characterization of truthmaker-gap epistemicism and argue that the view is incompatible with higher-order vagueness: vagueness in whether some case of the form ‘it is determinate that A’ or ‘it is indeterminate whether A’ is true. Since it is highly likely that there is higher-order vagueness, truthmaker-gap epistemicism is in an uncomfortable position.
One well-known approach to the soritical paradoxes is epistemicism, the view that propositions involving vague notions have definite truth values, though it is impossible in principle to know what they are. Recently, Paul Horwich has extended this approach to the liar paradox, arguing that the liar proposition has a truth value, though it is impossible to know which one it is. The main virtue of the epistemicist approach is that it need not reject classical logic, and in particular the unrestricted acceptance of the principle of bivalence and the law of excluded middle. Regardless of its success in solving the soritical paradoxes, the epistemicist approach faces a number of independent objections when it is applied to the liar paradox. I argue that the approach does not offer a satisfying, stable response to the paradoxes—not in general, and not for a minimalist about truth like Horwich.
There are three main traditional accounts of vagueness: one takes it as a genuinely metaphysical phenomenon, one takes it as a phenomenon of ignorance, and one takes it as a linguistic or conceptual phenomenon. In this paper I first very briefly present these views, especially the epistemicist and supervaluationist strategies, and briefly point to some well-known problems that the views carry. I then examine a 'statistical epistemicist' account of vagueness that is designed to avoid precisely these problems – it will be a view that provides an account of the phenomenon of vagueness as coming from our linguistic practices, while insisting that meaning supervenes on use, and that our use of vague terms does yield sharp and precise meanings, of which we are ignorant, thus allowing bivalence to hold.
Stewart Shapiro has objected to the epistemicist theory of vagueness on grounds that it gives counterintuitive predictions about cases involving conditional obligation. This paper details a response on the epistemicist’s behalf. I first argue that Shapiro’s own presentation of the objection is unsuccessful as an argument against epistemicism. I then reconstruct and offer two alternative arguments inspired by Shapiro’s considerations, and argue that these fail too, given the information-sensitive nature of conditional obligations.
Derek Parfit's combined-spectrum argument seems to conflict with epistemicism, a viable theory of vagueness. While Parfit argues for the indeterminacy of personhood, epistemicism denies indeterminacy. But, we argue, the linguistically based determinacy that epistemicism supports lacks the sort of normative or ontological significance that concerns Parfit. Thus, we reformulate his argument to make it consistent with epistemicism. We also dispute Roy Sorensen's suggestion that Parfit's argument relies on an assumption that fuels resistance to epistemicism, namely, that 'the magnitude of a modification must be proportional to its effect'.
Epistemicism seems to have been the dominant approach to vagueness over the past twenty years. In the logical and philosophical tradition, e.g. in Peirce, vagueness does not depend on human knowledge. Epistemicists deny this and contend that vagueness is merely the result of our imperfect minds and our dearth of knowledge, a sort of phantom, and finally that it simply does not exist. In my opinion, such a stance not only excludes vagueness understood in terms of human knowledge but, worse still, stems from spurious logical arguments. The group of arguments called Sorensen’s arguments, or even proofs, was the subject of my analysis in the book Paradoksy and in the paper “Epistemicism and Roy Sorensen Arguments” published in the Bulletin of the Section of Logic. Here I shall only briefly refer to these works and focus mainly on the arguments advanced by Timothy Williamson. One of them aims to explain why we are not able to recognize the alleged sharp boundary between the positive and negative extensions of any vague predicate. Williamson’s reasoning is based on his margin for error principle. Another of Williamson’s arguments aims at refuting the principle ‘I know that I know’. It should be emphasized that all the aforementioned arguments are fundamental for epistemicism, and all of them are fallacious, committing either a formal fallacy or a false-premise fallacy. For this reason we cannot deem epistemicism logically sound. Finally, we show that within the epistemic frame the following thesis is valid: if what epistemicism states is the case, then what epistemicism states is not the case. By the law (p → ¬p) → ¬p, this immediately implies that what epistemicism states is not the case. So: either epistemicism or logic.
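The closing inference invoked above is the classical law sometimes called consequentia mirabilis: a claim that implies its own negation is false. As an illustrative sketch (the abbreviation p is mine, not the author's):

```latex
% Consequentia mirabilis: if p implies its own negation, then p is false.
% Here p abbreviates "what epistemicism states is the case".
(p \rightarrow \neg p) \rightarrow \neg p
% Given the thesis argued above, p -> ~p, modus ponens yields ~p:
% what epistemicism states is not the case.
```

This law is classically valid (it follows from the definition of material implication together with excluded middle), which is what licenses the abstract's "either epistemicism or logic" dilemma.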
Let us say that the proposition that p is transparent just in case it is known that p, and it is known that it is known that p, and it is known that it is known that it is known that p, and so on, for any number of iterations of the knowledge operator ‘it is known that’. If there are transparent propositions at all, then the claim that any man with zero hairs is bald seems like a good candidate. We know that any man with zero hairs is bald. And it also does not seem completely implausible that we know that we know it, and that we know that we know that we know it, and so on.
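The opening definition can be rendered compactly in epistemic-logic notation (the operator K and the abbreviations below are illustrative, not the author's):

```latex
% Transparency: p is transparent iff every finite iteration of K holds,
% where K is the operator "it is known that".
\mathrm{Transparent}(p) \;\iff\; K p \wedge KK p \wedge KKK p \wedge \dots
% Equivalently, writing K^1 p = Kp and K^{n+1} p = K(K^n p):
\mathrm{Transparent}(p) \;\iff\; \text{for all } n \geq 1,\; K^{n} p
```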
This paper considers the connections between semantic shiftiness (plasticity), epistemic safety and an epistemic theory of vagueness as presented and defended by Williamson (1996a, b, 1997a, b). Williamson explains ignorance of the precise intension of vague words as rooted in insensitivity to semantic shifts: one’s inability to detect small shifts in intension for a vague word results in a lack of knowledge of the word’s intension. Williamson’s explanation, however, falls short of accounting for ignorance of intension.
In this paper I argue, first, that the only difference between Epistemicism and Nihilism about vagueness is semantic rather than ontological, and second, that once it is clear what the difference between these views is, Nihilism is a much more plausible view of vagueness than Epistemicism. Given the current popularity of certain epistemicist views, this result is, I think, of interest.
A formal result is proved which is used in Juhani Yli-Vakkuri’s ‘Epistemicism and Modality’ to argue that certain two-dimensional possible world models are inadequate for a language with operators for ‘necessarily’, ‘actually’, and ‘definitely’.
It is taken for granted in much of the literature on vagueness that semantic and epistemic approaches to vagueness are fundamentally at odds. If we can analyze borderline cases and the sorites paradox in terms of degrees of truth, then we don’t need an epistemic explanation. Conversely, if an epistemic explanation suffices, then there is no reason to depart from the familiar simplicity of classical bivalent semantics. I question this assumption, showing that there is an intelligible motivation for adopting a many-valued semantics even if one accepts a form of epistemicism. The resulting hybrid view has advantages over both classical epistemicism and traditional many-valued approaches.
That any filled location of spacetime contains a persisting thing has been defended based on the ‘argument from vagueness.’ It is often assumed that since the epistemicist account of vagueness blocks the argument from vagueness it facilitates a conservative ontology without gerrymandered objects. It doesn't. The epistemic vagueness of ordinary object predicates such as ‘bicycle’ requires that objects that can be described as almost-but-not-quite-bicycle exist even though they fall outside the predicate's sharp extension. Since the predicates that begin with ‘almost’ are vague as well, epistemicism's ontological backdrop is far from the conservative picture it is thought to enable.
This paper presents some difficulties for Timothy Williamson's epistemicist view of vagueness and for an argument he gives in its defense. First, I claim that the argument, which uses the notion of an "omniscient speaker", is question-begging. Next, I argue that some presumably true scientific hypotheses, which postulate certain relations between everyday vague predicates and scientific predicates, make the central theses of epistemicism highly implausible. Finally, I show that the "margin for error principles" used by Williamson to explain away the kind of ignorance conjectured by epistemicism lead to new sorites-like arguments with unacceptable conclusions.
What is the origin of individual differences in ideology and personality? According to the parasite stress hypothesis, the structure of a society and the values of individuals within it are both influenced by the prevalence of infectious disease within the society's geographical region. High levels of infection threat are associated with more ethnocentric and collectivist social structures and greater adherence to social norms, as well as with socially conservative political ideology and less open but more conscientious personalities. Here we use an agent-based model to explore a specific opportunities-parasites trade-off hypothesis, according to which utility-maximizing agents place themselves at an optimal point on a trade-off between the gains that may be achieved through accessing the resources of geographically or socially distant out-group members through openness to out-group interaction, and the losses arising due to consequently increased risks of exotic infection to which immunity has not been developed. We examine the evolution of cooperation and the formation of social groups within social networks, and we show that the groups that spontaneously form exhibit greater local rather than global cooperative networks when levels of infection are high. It is suggested that the OPTO model offers a first step toward understanding the specific mechanisms through which environmental conditions may influence cognition, ideology, personality, and social organization.
Are there any such things as mind viruses? By analogy with biological parasites, such cultural items are supposed to subvert or harm the interests of their host. Most popularly, this notion has been associated with Richard Dawkins’ concept of the “selfish meme”. To unpack this claim, we first clear some conceptual ground around the notions of cultural adaptation and units of culture. We then formulate Millikan’s challenge: how can cultural items develop novel purposes of their own, cross-cutting or subverting human purposes? If this central challenge is not met, talk of cultural ‘parasites’ or ‘selfish memes’ will be vacuous or superfluous. First, we discuss why other attempts to answer Millikan’s challenge have failed. In particular, we put to rest the claims of panmemetics, a somewhat sinister worldview according to which human culture is nothing more than a swarm of selfish agents, plotting and scheming behind the scenes. Next, we reject a more reasonable, but still overly permissive approach to mind parasites, which equates them with biologically maladaptive culture. Finally, we present our own answer to Millikan’s challenge: certain systems of misbelief can be fruitfully treated as selfish agents developing novel purposes of their own. In fact, we venture that this is the only way to properly understand them. Systems of misbelief are designed to spread in a viral-like manner, without any regard to the interests of their human hosts, and with possibly harmful consequences. As a proof of concept, we discuss witchcraft beliefs in early modern Europe. In this particular case, treating cultural representations as “parasites” – i.e. adopting the meme’s eye view – promises to shed new light on a mystery that historians and social scientists have been wrestling with for decades.
Are there any such things as mind viruses? By analogy with biological parasites, such cultural items are supposed to subvert or harm the interests of their host. Most popularly, this notion has been associated with Richard Dawkins’ concept of the “selfish meme”. To unpack this claim, we first clear some conceptual ground around the notion of cultural adaptation and units called ‘memes’. We then formulate Millikan’s challenge: how can cultural items develop novel purposes of their own, cross-cutting or subverting human purposes? If this central challenge is not met, meme talk will be vacuous or superfluous. First, we discuss why other attempts to answer Millikan’s challenge have failed. In particular, we put to rest the claims of panmemetics, a somewhat sinister worldview which treats all of culture as swarms of selfish agents. Next, we reject a more reasonable, but still overly permissive approach to cultural parasitism, which equates mind parasites with biologically maladaptive culture. Finally, we present our own answer to Millikan’s challenge: certain systems of misbelief can be fruitfully treated as selfish agents developing novel purposes of their own. Such mind parasites are designed to spread in a viral-like manner, without any regard to the interests of their human hosts. As a case study, we discuss the witch hunts of early modern Europe. In this particular case, adopting the meme’s eye view promises to shed new light on a mystery that historians and social scientists have been wrestling with for decades.
This paper targets a series of potential issues for the discussion of, and modal resolution to, the alethic paradoxes advanced by Scharp (2013). I aim, then, to provide a novel, epistemicist treatment of the alethic paradoxes. In response to Curry's paradox, the epistemicist solution that I advance enables the retention of both classical logic and the traditional rules for the alethic predicate: truth-elimination and truth-introduction. By availing of epistemic modal logic, the epistemicist approach further permits a descriptively adequate explanation of the indeterminacy that is exhibited by epistemic states concerning liar-paradoxical sentences.
This article is concerned with exploring the idea of places as providing persons with nourishment. This version of person–place relations is displayed in a paper by McHugh and, in provocative fashion, in Michel Serres’s analysis of the human condition as a parasitic one. Unlike McHugh, Serres combines his analysis of parasites with a concern that principled actors may be insufficiently attached to places. His views are revealed in his interpretations of works by Molière and Plato. By reinterpreting these works, I try to suggest that Serres’s well-founded scepticism as to the level of commitment of principled actors to the places that, as he rightly points out, are nourishing them, may not apply to the sub-set of principled actors who deserve to be called particular.
This paper distinguishes between epistemic and metaphysical problems of arbitrariness for vagueness. It argues that epistemicism can resolve the epistemic problem of arbitrariness but not the metaphysical one.
Epistemicism is the view that seemingly vague predicates are not in fact vague. Consequently, there must be a sharp boundary between a man who is bald and one who is not bald. Although such a view is often met with incredulity, my aim is to provide a defense of epistemicism in this essay. My defense, however, is backhanded: I argue that the formal commitments of epistemicism are the result of good practical reasoning, not metaphysical necessity. To get to that conclusion, I spend most of the essay arguing that using a formal system like classical logic to manage seemingly vague situations requires practical principles to mediate between the formalism and what it aims to represent.
The categorization of individuals or groups as social parasites has often been treated as an example of semantic transfer from the biological to the social domain. Historically, however, the scientific uses of the term parasite cannot be deemed to be primary, as their emergence in the seventeenth and eighteenth centuries was preceded by a much older tradition of religious and social terminology. Its social use in modern times, on the other hand, builds on a secondary metaphorization from the scientific source concept. This article charts the history of the term parasite from its etymological origins to the present day, distinguishes its metaphorical and non-metaphorical uses, and discusses the implications of these findings regarding the cognitive understanding of the relationship between literal and metaphorical meanings. In conclusion, it is argued that metaphorization needs to be analyzed not only in terms of its conceptual structure but also in its role in discourse history.
The paper challenges Williamson’s safety-based explanation for why we cannot know the cut-off points of vague expressions. We assume throughout (most of) the paper that Williamson is correct in saying that vague expressions have sharp cut-off points, but we argue that Williamson’s explanation for why we do not and cannot know these cut-off points is unsatisfactory. In section 2 we present Williamson's position in some detail. In particular, we note that Williamson's explanation relies on taking a particular safety principle ('Meta-linguistic belief safety' or 'MBS') as a necessary condition on knowledge. In section 3, we show that even if MBS were a necessary condition on knowledge, that would not be sufficient to show that we cannot know the cut-off points of vague expressions. In section 4, we present our main case against Williamson's explanation: we argue that MBS is not a necessary condition on knowledge, by presenting a series of cases where one's belief violates MBS but nevertheless constitutes knowledge. In section 5, we present and respond to an objection to our view. And in section 6, we briefly discuss the possible directions a theory of vagueness can take, if our objection to Williamson's theory is taken on board.
This paper consists of two parts. The first concerns the logic of vagueness. The second concerns a prominent debate in metaphysics. One of the most widely accepted principles governing the ‘definitely’ operator is the principle of Distribution: if ‘p’ and ‘if p then q’ are both definite, then so is ‘q’. I argue, however, that epistemicists about vagueness should reject this principle. The discussion also helps to shed light on the elusive question of what, on this framework, it takes for a sentence to be borderline or definite. In the second part of the paper, I apply this result to a prominent debate in metaphysics. One of the most influential arguments in favour of Universalism about composition is the Lewis-Sider argument from vagueness. An interesting question, however, is whether epistemicists have any particular reasons to resist the argument. I show that there is no obvious reason why epistemicists should resist the argument but there is a non-obvious one: the rejection of Distribution argued for in the first part of the paper provides epistemicists with a unique way of resisting the argument from vagueness.
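The Distribution principle stated above can be written out with a definiteness operator Δ (the notation is illustrative, not drawn from the paper itself):

```latex
% Distribution for the 'definitely' operator \Delta:
% if p and (p -> q) are both definite, then so is q.
\big( \Delta p \wedge \Delta (p \rightarrow q) \big) \rightarrow \Delta q
```

In modal-logic terms this is the analogue of the K axiom for Δ, which is why rejecting it, as the abstract proposes, is a substantive logical commitment.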
In this commentary on Daniel Dennett's 'From Bacteria to Bach and Back', I make some suggestions to strengthen the meme concept, in particular the hypothesis of cultural parasitism. This is a notion that has both caused excitement among enthusiasts and raised the hackles of critics. Is the “meme” meme itself an annoying piece of malware, which has infected and corrupted the mind of an otherwise serious philosopher? Or is it an indispensable theoretical tool, as Dennett believes, which deserves to be spread far and wide?
Recent advances in immunology have provided a foundation of knowledge to understand many of the intricacies involved in manipulating the human response to fight parasitic infections, and a great deal has been learned from malaria vaccine efforts regarding strategies for developing parasite vaccines. There has been some encouraging progress in the development of a Chagas vaccine in animal models. A prize fund for Chagas could be instrumental in ensuring that these efforts are translated into products that benefit patients.
What happens to education when the potential it helps realize in the individual works against the formal purposes of the curriculum? What happens when education becomes a vehicle for its own subversion? As a subject-forming state apparatus working on ideological speciesism, formal education is engaged in both human and animal stratification in service of the capitalist knowledge economy. This seemingly stable condition is, however, destabilized by the animal rights activist as undercover learner and worker, who enters education and research laboratories under false premises in order to extract the knowledge necessary to dismantle the logic of animal utility on which the scientific-educational apparatus rests. The present article is based on a semi-structured interview with an undercover worker. It draws on a synthesis of critical education and posthumanist theories to configure knowledge creation and subjectification processes in the “negative spaces” of education. The techne of undercover work includes mnemotechnical and prosthetic devices, calculation of risk, and mimetic labor. The article argues that the agenda of the undercover worker generates a multi-strained mimetic complex that composes a parasitic educational subject-assemblage redirecting scientific knowledge away from the animal stratification logic of the knowledge economy into different viral circuits; different lines of flight. It invites a rearticulation of the formal education state apparatus in more indeterminate directions, provoking scientific-educational knowledge-practices to become a catalytic impulse for their own disintegration.
Whether any property is internal to a particular object may be taken to depend upon the way in which the object is described. Thus it is not an internal property of Scott to have been the author of Waverley, neither is it an internal property of the author of Ivanhoe. But what of the author of Waverley? Is the proposition that the author of Waverley composed Waverley necessarily true? On one interpretation of it, it surely is. Even so, one can attach a sense to saying that the person who was in fact the author of Waverley might not have been so. All that is needed for this is that he be capable of being otherwise identified.
This article introduces the idea of ‘dependence subtexts’ to explain how the stories that we encounter in property theory and public rhetoric function to make some actors appear ‘independent’, and thus capable of acquiring property in their own right, while making other actors appear ‘dependent’ and thus incapable of acquiring property. The argument develops the idea of ‘dependence subtexts’ out of the work of legal scholar Carol Rose and political theorist Carole Pateman, before using it as a tool for contrasting the canonical property stories of John Locke and Pierre-Joseph Proudhon. We argue that the link between property and dependence provides a useful starting point for understanding issues of economic justice that share a common political problem: how do we choose to govern the relation between dependence and independence through the institution of property?