Formal models of appearance and reality have proved fruitful for investigating structural properties of perceptual knowledge. This paper applies the same approach to epistemic justification. Our central goal is to give a simple account of The Preface, in which justified belief fails to agglomerate. Following recent work by a number of authors, we understand knowledge in terms of normality. An agent knows p iff p is true throughout all relevant normal worlds. To model The Preface, we appeal to the normality of error. Sometimes, it is more normal for reality and appearance to diverge than to match. We show that this simple idea has dramatic consequences for the theory of knowledge and justification. Among other things, we argue that a proper treatment of The Preface requires a departure from the internalist idea that epistemic justification supervenes on the appearances and the widespread idea that one knows most when free from error.
According to the KK-principle, knowledge iterates freely. It has been argued, notably in Greco, that accounts of knowledge which involve essential appeal to normality are particularly conducive to defence of the KK-principle. The present article evaluates the prospects for employing normality in this role. First, it is argued that the defence of the KK-principle depends upon an implausible assumption about the logical principles governing iterated normality claims. Once this assumption is dropped, counter-instances to the principle can be expected to arise. Second, it is argued that even if the assumption is maintained, there are other logical properties of normality which can be expected to lead to failures of KK. Such failures are noteworthy, since they do not depend on either a margins-for-error principle or a safety condition of the kinds Williamson appeals to in motivating rejection of the KK-principle. “Introduction: KK and Being in a Position to Know” Section formulates two versions of the KK-principle; “Inexact Knowledge and Margins for Error” Section presents a version of Williamson’s margins-for-error argument against it; “Knowledge and Normality” and “Iterated Normality” Sections discuss the defence of the KK-principle due to Greco and show that it is dependent upon the implausible assumption that the logic of normality ascriptions is at least as strong as K4; finally, “Knowledge in Abnormal Conditions” and “Higher-Order Ignorance Inside the Margins” Sections argue that a weakened version of Greco’s constraint on knowledge is plausible and demonstrate that this weakened constraint will, given uncontentious assumptions, systematically generate counter-instances to the KK-principle of a novel kind.
In non‐literal uses of language, the content an utterance communicates differs from its literal truth conditions. Loose talk is one example of non‐literal language use (amongst many others). For example, what a loose utterance of (1) communicates differs from what it literally expresses: (1) Lena arrived at 9 o'clock. Loose talk is interesting (or so I will argue). It has certain distinctive features which raise important questions about the connection between literal and non‐literal language use. This paper aims to (i.) introduce a range of novel data demonstrating certain overlooked features of loose talk, and (ii.) develop a new theory of the phenomenon which accounts for these data. In particular, this theory is motivated by the need to explain minimal pairs such as (2)-(3): (2) Lena arrived at 9 o'clock, but she did not arrive at 9 o'clock exactly. (3) ?? Lena did not arrive at 9 o'clock exactly, but she arrived at 9 o'clock. (2) and (3) agree in their truth conditions. Yet they differ in felicity. As such, they constitute a problem for any account which hopes to predict the acceptability of the loose use of a sentence from its truth conditions and the context of utterance alone. Instead, it will be argued, to explain loose talk phenomena we must posit an additional layer of meaning outstripping truth conditions. This layer of meaning is shown to exhibit a range of properties, all of which point to its being semantically encoded. Thus, if correct, the theory provides a new example of how semantic meaning must extend beyond literal, truth‐conditional content.
Inquiry aims at knowledge. Your inquiry into a question succeeds just in case you come to know the answer. However, combined with a common picture on which misleading evidence can lead knowledge to be lost, this view threatens to recommend a novel form of dogmatism. At least in some cases, individuals who know the answer to a question appear required to avoid evidence bearing on it. In this paper, we’ll aim to do two things. First, we’ll present an argument for this novel form of dogmatism and show that it presents a substantive challenge. Second, we’ll consider a way those who take knowledge to be the aim of inquiry can mount a response. In the course of doing so, we’ll try to get clearer on the normative connections between inquiry, knowledge and evidence gathering.
Suppositional theories of conditionals take apparent similarities between supposition and conditionals as a starting point, appealing to features of the former to provide an account of the latter. This paper develops a novel form of suppositional theory, one which characterizes the relationship at the level of semantics rather than at the level of speech acts. In the course of doing so, it considers a range of novel data which shed additional light on how conditionals and supposition interact.
There is a large literature exploring how accuracy constrains rational degrees of belief. This paper turns to the unexplored question of how accuracy constrains knowledge. We begin by introducing a simple hypothesis: increases in the accuracy of an agent’s evidence never lead to decreases in what the agent knows. We explore various precise formulations of this principle, consider arguments in its favour, and explain how it interacts with different conceptions of evidence and accuracy. As we show, the principle has some noteworthy consequences for the wider theory of knowledge. First, it implies that an agent cannot be justified in believing a set of mutually inconsistent claims. Second, it implies the existence of a kind of epistemic blindspot: it is not possible to know that one’s evidence is misleading.
This paper investigates the interaction of phenomena associated with loose talk with embedded contexts. §1. introduces core features associated with the loose interpretation of an utterance and presents a sketch of how to theorise about such utterances in terms of a relation of ‘pragmatic equivalence’. §2. discusses further features of loose talk arising from interaction with ‘loose talk regulators’, negation and conjunction. §§3-4. introduce a hybrid static/dynamic framework and show how it can be employed in developing a fragment which accounts for the data surveyed in §§1-2.
Some utterances of imperative clauses have directive force—they impose obligations. Others have permissive force—they extend permissions. The dominant view is that this difference in force is not accompanied by a difference in semantic content. Drawing on data involving free choice items in imperatives, I argue that the dominant view is incorrect.
Indicative and subjunctive conditionals are in non-complementary distribution: there are conversational contexts at which both are licensed (Stalnaker (1975), Karttunen & Peters (1979), von Fintel (1998)). This means we can ask an important, but under-explored, question: in contexts which license both, what relations hold between the two? In this paper, I’ll argue for an initially surprising conclusion: when attention is restricted to the relevant contexts, indicatives and subjunctives are co-entailing. §1 introduces the indicative/subjunctive distinction, along with a discussion of the relevant notion of entailment; §2 presents the main argument of the paper, and §3 considers some of the philosophical implications of the argument in §2. Finally, §4 argues that we can reconcile the equivalence of indicatives and subjunctives with apparently conflicting judgments.
We investigate a novel use of the English temporal modifier ‘now’, in which it combines with a subordinate clause. We argue for a univocal treatment of the expression, on which the subordinating use is taken as basic and the non-subordinating uses are derived. We start by surveying central features of the latter uses which have been discussed in previous work, before introducing key observations regarding the subordinating use of ‘now’ and its relation to deictic and anaphoric uses. All of these data, it is argued, can be accounted for on our proposed analysis. We conclude by comparing ‘now’ to a range of other expressions which exhibit similar behavior.
Research into the cognition of conditionals has predominantly focused on conditional reasoning, producing a range of theories which explain associated phenomena with considerable success. However, such theories have been less successful in accommodating experimental data concerning how agents assess the probability of indicative conditionals. Since an acceptable account of conditional reasoning should be compatible with evidence regarding how we evaluate conditionals’ likelihoods, this constitutes a failing of such theories. Section 1 introduces the dominant established approach to conditional reasoning: mental models theory. Surveying a range of experimental results, I show that mental models theory (along with competing theories) is incapable of fully accounting for findings regarding judgements about conditionals’ probabilities. Section 2 introduces an alternative account of deductive reasoning, the erotetic theory, recently proposed by Koralus and Mascarenhas (2013). Section 3 argues that, given a natural extension, this theory is able to explain the otherwise unaccounted-for data.