Theories of consciousness divide over whether perceptual consciousness is rich or sparse in specific representational content and whether it requires cognitive access. These two issues are often treated in tandem because of a shared assumption that the representational capacity of cognitive access is fairly limited. Recent research on working memory challenges this shared assumption. This paper argues that abandoning the assumption undermines post-cue-based “overflow” arguments, according to which perceptual consciousness is rich and does not require cognitive access. Abandoning it also dissociates the rich/sparse debate from the access question. The paper then explores attempts to reformulate overflow theses in ways that don’t require the assumption of limited capacity. Finally, it discusses the problem of relating seemingly non-probabilistic perceptual consciousness to the probabilistic representations posited by the models that challenge conceptions of cognitive access as capacity-limited.
Who are the best subjects for judgment tasks intended to test grammatical hypotheses? Michael Devitt ([2006a], [2006b]) argues, on the basis of a hypothesis concerning the psychology of such judgments, that linguists themselves are. We present empirical evidence suggesting that the relevant divide is not between linguists and non-linguists, but between subjects with and without minimally sufficient task-specific knowledge. In particular, we show that subjects with at least some minimal exposure to or knowledge of such tasks tend to perform consistently with one another—greater knowledge of linguistics makes no further difference—while at the same time exhibiting markedly greater in-group consistency than those who have no previous exposure to or knowledge of such tasks and their goals.
Michael Devitt ([2006a], [2006b]) argues that, insofar as linguists possess better theories about language than non-linguists, their linguistic intuitions are more reliable. Culbertson and Gross presented empirical evidence contrary to this claim. Devitt replies that, in part because we overemphasize the distinction between acceptability and grammaticality, we misunderstand linguists' claims, fall into inconsistency, and fail to see how our empirical results can be squared with his position. We reply in this note. Inter alia we argue that Devitt's focus on grammaticality intuitions, rather than acceptability intuitions, distances his discussion from actual linguistic practice. We close by questioning a demand that drives his discussion—viz., that, for linguistic intuitions to supply evidence for linguistic theorizing, a better account of why they are evidence is required.
Does perceptual consciousness require cognitive access? Ned Block argues that it does not. Central to his case are visual memory experiments that employ post-stimulus cueing—in particular, Sperling's classic partial report studies, change-detection work by Lamme and colleagues, and a recent paper by Bronfman and colleagues that exploits our perception of ‘gist’ properties. We argue contra Block that these experiments do not support his claim. Our reinterpretations differ from previous critics' in challenging as well a longstanding and common view of visual memory as involving declining capacity across a series of stores. We conclude by discussing the relation of probabilistic perceptual representations and phenomenal consciousness.
Zenon Pylyshyn argues that cognitively driven attentional effects do not amount to cognitive penetration of early vision because such effects occur either before or after early vision. Critics object that in fact such effects occur at all levels of perceptual processing. We argue that Pylyshyn’s claim is correct—but not for the reason he emphasizes. Even if his critics are correct that attentional effects are not external to early vision, these effects do not satisfy Pylyshyn’s requirements that the effects be direct and exhibit semantic coherence. In addition, we distinguish our defense from those found in recent work by Raftopoulos and by Firestone and Scholl, argue that attention should not be assimilated to expectation, and discuss alternative characterizations of cognitive penetrability, advocating a kind of pluralism.
Linguists often advert to what are sometimes called linguistic intuitions. These intuitions and the uses to which they are put give rise to a variety of philosophically interesting questions: What are linguistic intuitions – for example, what kind of attitude or mental state is involved? Why do they have evidential force and how might this force be underwritten by their causal etiology? What light might their causal etiology shed on questions of cognitive architecture – for example, as a case study of how consciously inaccessible subpersonal processes give rise to conscious states, or as a candidate example of cognitive penetrability? What methodological issues arise concerning how linguistic intuitions are gathered and interpreted – for example, might some subjects' intuitions be more reliable than others? And what bearing might all this have on philosophers' own appeals to intuitions? This paper surveys and critically discusses leading answers to these questions. In particular, we defend a ‘mentalist’ conception of linguistics and the role of linguistic intuitions therein.
Fiona Macpherson (2012) argues that various experimental results provide strong evidence in favor of the cognitive penetration of perceptual color experience. Moreover, she proposes a mechanism for how such cognitive penetration occurs. We argue, first, that the results on which Macpherson relies do not provide strong grounds for her claim of cognitive penetrability; and, second, that, if the results do reflect cognitive penetrability, then time-course considerations raise worries for her proposed mechanism. We base our arguments in part on several of our own experiments, reported herein.
There is a long tradition of drawing metaphysical conclusions from investigations into language. This paper concerns one contemporary variation on this theme: the alleged ontological significance of cognitivist truth-theoretic accounts of semantic competence. According to such accounts, human speakers’ linguistic behavior is in part empirically explained by their cognizing a truth-theory. Such a theory consists of a finite number of axioms assigning semantic values to lexical items, a finite number of axioms assigning semantic values to complex expressions on the basis of their structure and the semantic values of their constituents, and a finite number of production schemata. The theory enables the derivation of truth-conditions for each sentence of the language: something of roughly the form ‘S is true iff P’. The claim that speakers stand in a cognitive relation to such theories is advanced, not as a conceptual analysis of semantic competence or understanding, but rather as an empirical hypothesis about human speakers in particular, one part of a broader empirical account of our linguistic competence and cognition generally. It therefore must mesh with the rest of our theorizing in these areas and whatever relevant data from neighboring inquiries there may be. The precise nature of the cognitive relation a speaker is supposed to bear to a truth-theory is a matter of some dispute. I speak of ‘cognizing’ (following..
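The axiomatic structure just described can be illustrated with a toy fragment. This is only a sketch under illustrative assumptions: the one-name, one-predicate language and the particular axioms are mine, not drawn from the paper.

```latex
\begin{align*}
&\text{(A1)}\quad \mathrm{Ref}(\text{`Ernie'}) = \text{Ernie}
  && \text{lexical axiom: a name's semantic value} \\
&\text{(A2)}\quad \forall x\,\bigl(x \text{ satisfies `barks'} \leftrightarrow x \text{ barks}\bigr)
  && \text{lexical axiom: a predicate's semantic value} \\
&\text{(A3)}\quad \ulcorner \alpha\; \Phi \urcorner \text{ is true} \leftrightarrow
  \mathrm{Ref}(\alpha) \text{ satisfies } \Phi
  && \text{compositional axiom for name--predicate sentences} \\
&\text{(T)}\quad\; \text{`Ernie barks' is true} \leftrightarrow \text{Ernie barks}
  && \text{derived T-sentence, of the form `S is true iff P'}
\end{align*}
```

The derivation of (T) from (A1)–(A3) is what the paper means by the theory "enabling the derivation of truth-conditions for each sentence of the language."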
Can one combine Davidsonian semantics with a deflationary conception of truth? Williams argues, contra a common worry, that Davidsonian semantics does not require truth-talk to play an explanatory role. Horisk replies that, in any event, the expressive role of truth-talk that Williams emphasizes disqualifies deflationary accounts—at least extant varieties—from combination with Davidsonian semantics. She argues, in particular, that this is so for Quine's disquotationalism, Horwich's minimalism, and Brandom's prosententialism. I argue that Horisk fails to establish her claim in all three cases. This involves clarifying Quine’s understanding of a purely referential occurrence; explaining how Davidsonians can avail themselves of a syntactic treatment of lexical ambiguity; and correcting a common misreading of Brandom (answering along the way an objection offered by Künne as well).
Is temporal representation constitutively necessary for perception? Tyler Burge (2010) argues that it is, in part because perception requires a form of memory sufficiently sophisticated as to require temporal representation. I critically discuss Burge’s argument, maintaining that it does not succeed. I conclude by reflecting on the consequences for the origins of temporal representation.
This paper motivates two bases for ascribing propositional semantic knowledge (or something knowledge-like): first, because it’s necessary to rationalize linguistic action; and, second, because it’s part of an empirical theory that would explain various aspects of linguistic behavior. The semantic knowledge ascribed on these two bases seems to differ in content, epistemic status, and cognitive role. This raises the question: how are they related, if at all? The bulk of the paper addresses this question. It distinguishes a variety of answers and their varying philosophical and empirical commitments.
This chapter examines the “externalist” claim that semantics should include theorizing about representational relations among linguistic expressions and (purported) aspects of the world. After disentangling our main topic from other strands in the larger set of externalist-internalist debates, arguments both for and against this claim are discussed. It is argued, among other things, that the fortunes of this externalist claim are bound up with contentious issues concerning the semantics-pragmatics border.
When a debate seems intractable, with little agreement as to how one might proceed towards a resolution, it is understandable that philosophers should consider whether something might be amiss with the debate itself. Famously in the last century, philosophers of various stripes explored in various ways the possibility that at least certain philosophical debates are in some manner deficient in sense. Such moves are no longer so much in vogue. For one thing, the particular ways they have been made have themselves undergone much critical scrutiny, so that many philosophers now feel that there is, for example, a Quinean response to Carnap, a Gricean reply to Austin, and a diluting proliferation of Wittgenstein interpretations. Be that as it may, there do of..
Stewart Shapiro’s book develops a contextualist approach to vagueness. It’s chock-full of ideas and arguments, laid out in wonderfully limpid prose. Anyone working on vagueness (or the other topics it touches on—see below) will want to read it. According to Shapiro, vague terms have borderline cases: there are objects to which the term neither determinately applies nor determinately does not apply. A term determinately applies in a context iff the term’s meaning and the non-linguistic facts determine that it does. The non-linguistic facts include the “external” context: “comparison class, paradigm cases, contrasting cases, etc.” (33) But external-context sensitivity is not what’s central to Shapiro’s contextualism. Even fixing external context, vague terms’ (anti-)extensions exhibit sensitivity to internal context: the decisions of competent speakers. According to Shapiro’s open texture thesis, for each borderline case, there is some circumstance in which a speaker, consistently with the term’s meaning and the non-linguistic facts, can judge it to fall into the term’s extension and some circumstance in which the speaker can judge it to fall into the term’s anti-extension: she can “go either way.” Moreover, borderline sentences are Euthyphronically judgment-dependent: a competent speaker’s judging a borderline to fall into a term’s (anti-)extension makes it so. For Shapiro, then, a sentence can be true but indeterminate: a case left unsettled by meaning and the non-linguistic facts (and thus indeterminate, or borderline) may be made true by a competent speaker’s judgment.
Importantly, among the non-linguistic facts that constrain speakers’ judgments (at least in the cases Shapiro cares about) is a principle of tolerance: for all x and y, if x and y differ marginally in the relevant respect (henceforth, Mxy), then if one competently judges Bx, one cannot competently judge y in any other manner in the same (total) context. This does not require that one judge By: one might not consider the matter at all.
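Schematically, and in my own notation rather than Shapiro's, the tolerance principle can be rendered as follows, where $J_c(\varphi)$ abbreviates "one competently judges that $\varphi$ in total context $c$":

```latex
\forall x\,\forall y\,\bigl(\, Mxy \;\rightarrow\; \bigl( J_c(Bx) \rightarrow \neg\, J_c(\neg By) \bigr) \bigr)
```

Note that the consequent forbids only a contrary competent judgment about y in the same total context c; it does not entail $J_c(By)$, since one may fail to consider y at all.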
In this note, I clarify the point of my paper “The Nature of Semantics: On Jackendoff’s Arguments” (NS) in light of Ray Jackendoff’s comments in his “Linguistics in Cognitive Science: The State of the Art.” Along the way, I amplify my remarks on unification.
According to cognitivist truth-theoretic accounts of semantic competence, aspects of our linguistic behavior can be explained by ascribing to speakers cognition of truth theories. It's generally assumed on this approach that, however much context sensitivity speakers' languages contain, the cognized truth-theories themselves can be adequately characterized context insensitively—that is, without using in the metalanguage expressions whose semantic value can vary across occasions of utterance. In this paper, I explore some of the motivations for and problems and consequences of dropping this assumption.
Michael Tye responds to the problem of higher-order vagueness for his trivalent semantics by maintaining that truth-value predicates are “vaguely vague”: it’s indeterminate, on his view, whether they have borderline cases and therefore indeterminate whether every sentence is true, false, or indefinite. Rosanna Keefe objects (1) that Tye’s argument for this claim tacitly assumes that every sentence is true, false, or indefinite, and (2) that the conclusion is in any case not viable. I argue – contra (1) – that Tye’s argument needn’t make that assumption. A version of her objection is in fact better directed against other arguments Tye advances, though Tye can absorb this criticism without abandoning his position’s core. On the other hand, Keefe’s second objection does hit the mark: embracing ‘vaguely vague’ truth-value predicates undermines Tye’s ability to support validity claims needed to defend his position. To see this, however, we must develop Keefe’s remarks further than she does.
Donald Davidson aims to illuminate the concept of meaning by asking: What knowledge would suffice to put one in a position to understand the speech of another, and what evidence sufficiently distant from the concepts to be illuminated could in principle ground such knowledge? Davidson answers: knowledge of an appropriate truth-theory for the speaker’s language, grounded in what sentences the speaker holds true, or prefers true, in what circumstances. In support of this answer, he both outlines such a truth-theory for a substantial fragment of a natural language and sketches a procedure—radical interpretation—that, drawing on such evidence, could confirm such a theory. Bracketing refinements (e.g., those introduced to..
Drawing upon research in philosophical logic, linguistics and cognitive science, this study explores how our ability to use and understand language depends upon our capacity to keep track of complex features of the contexts in which we converse.
Should a theory of meaning state what sentences mean, and can a Davidsonian theory of meaning in particular do so? Max Kölbel answers both questions affirmatively. I argue, however, that the phenomena of non-homophony, non-truth-conditional aspects of meaning, semantic mood, and context-sensitivity provide prima facie obstacles for extending Davidsonian truth-theories to yield meaning-stating theorems. Assessing some natural moves in reply requires a more fully developed conception of the task of such theories than Kölbel provides. A more developed conception is also required to defend his positive answer to the first question above. I argue that, however Kölbel might elaborate his position, it can’t be by embracing the sort of cognitivist account of Davidsonian semantics to which he sometimes alludes.
Jim Hopkins defends a ‘straight’ response to Wittgenstein’s rule-following considerations, a response he ascribes to Wittgenstein himself. According to this response, what makes it the case that A means that P is that it is possible for another to interpret A as meaning that P. Hopkins thus advances a form of interpretivist judgment-dependence about meaning. I argue that this response, as well as a variant, does not succeed.
Jackendoff defends a mentalist approach to semantics that investigates conceptual structures in the mind/brain and their interfaces with other structures, including specifically linguistic structures responsible for syntactic and phonological competence. He contrasts this approach with one that seeks to characterize the intentional relations between expressions and objects in the world. The latter, he argues, cannot be reconciled with mentalism. He objects in particular that intentionality cannot be naturalized and that the relevant notion of object is suspect. I critically discuss these objections, arguing in part that Jackendoff’s position rests on questionable philosophical assumptions.
Cappelen and Lepore (2005) argue that "[s]peakers need not believe everything they sincerely say." I argue that their latest (2006a) defence of this claim proposes a problematic principle that does not yield their surprising conclusion.
There is nothing in [the six chapters that make up the body of Articulating Reasons] that will come as a surprise to anyone who has mastered [Making It Explicit]. … I had in mind audiences that had perhaps not so much as dipped into the big book but were curious about its themes and philosophical consequences. (35–36).
The claims are grounded in a wealth of fascinating data, particularly on primate and young child communication and social cognition, much produced by Tomasello’s own lab. But there is certainly no dearth of stimulating speculation. Tomasello’s story is rich and complex. In what follows, I focus on aspects of the three hypotheses listed above, offering some commentary as I go.
Fiona Cowie’s _What’s Within_ consists of three parts. In the first, she examines the early modern rationalist-empiricist debate over nativism, isolating what she considers the two substantive “strands” (67) that truly separated them: whether there exist domain-specific learning mechanisms, and whether concept acquisition is amenable to naturalistic explanation. She then turns, in the book’s succeeding parts, to where things stand today with these issues. The second part argues that Jerry Fodor’s view of concepts is continuous with traditional nativism in that it precludes a naturalistic story of concept acquisition. Cowie objects, however, to Fodor’s path to this conclusion and thus sees no reason to endorse it. The third part assesses Chomskyan nativism as a contemporary instance of positing domain-specific learning mechanisms. Though she is highly critical of how “poverty of the stimulus” arguments and the like have been used to lend credence to stronger conclusions, she holds that such arguments do indeed support the nativist’s domain-specificity claim. Cowie’s reconsideration of nativism thus limits itself to concepts and language (a few exceptions aside: there are two brief forays into face recognition and a mention of pathogen response). The terrain she does cover, however, is vast; and Cowie’s illuminating discussions will stimulate anyone interested in the area. As I focus on a few large-scale qualms in what follows, let me mention in particular that much of what is of interest in Cowie’s book is to be found in her detailed consideration of specific arguments.
Normal mature human language users arguably possess two kinds of knowledge of meaning. On the one hand, they possess semantic knowledge that rationalizes their linguistic behavior. This knowledge can be characterized homophonically, can be self-ascribed without adverting to 3rd-person evidence, and is accessible to consciousness. On the other hand, there are empirical grounds for ascribing to them knowledge, or cognition, of a compositional semantic theory. This knowledge lacks the three qualities listed above. This paper explores the possible relations among these two kinds of semantic knowledge. Is the former derived from the latter? Do these ascriptions in fact characterize the same states albeit in different ways? Special attention is paid to the varying philosophical and empirical commitments that different answers incur.
Concepts, the 1996 John Locke Lectures, synthesizes and develops Fodor’s views on the eponymous topic. It’s immensely stimulating. Anyone working in the area will need to study its trenchant critical discussion of key positions in philosophy, linguistics, and psychology. These readers will be rewarded as well by the book’s many illuminating asides and its more constructive closing chapters. With its wealth of ideas and enjoyably Fodorian prose, Concepts auspiciously inaugurates the Oxford Cognitive Science Series. Oxford University Press is also to be commended for making Concepts immediately available in paperback, though it contains far too many typos.
Can all truths be stated in precise language? Not if true indirect speech reports of assertions entered using vague language must themselves use vague language. Sententialism – the view that an indirect speech report is true if and only if the report’s complement clause “same-says” the sentence the original speaker uttered – provides two ways of resisting this claim: first, by allowing that precise language can “same-say” vague language; second, by implying that expressions occurring in an indirect speech report’s complement clause are not used. I reject the first line of resistance, but argue that the second is successful if one accepts sententialism.