I unpack the contents of the motto that “meaning is use” in fivefold fashion and point to the elements it contains, which are open to ideological exploitation, the main reason for its strong appeal in intellectual circles. I indicate how one sense of it, “where there is use, there is meaning”, has encouraged equalitarian accounts of meaning and truth. I then present and discuss Austin’s distinction between the Sentence and the Statement, which entails the presence of meaning preceding the use, and directing it, and offer a new proof that Sentences are impossible to eliminate in any semantic scheme of things. Austin’s distinction, as explained and defended, refutes the contention that “meaning is use”. I proceed to his doctrine of Locution and Illocution, which reflects the previous distinction, indicating by a series of examples that illocutionary varieties, which are varieties but not variances, can never extend beyond the semantic scope generically contained in the original content, that is to say, the Sentence. Those that do, and they are several, violate the rules of sense. I enumerate his vast differences with Wittgenstein, and proceed to defend Austin’s noted conservatism against the novelties endorsed by the latter and his disciples. Charging Wittgenstein’s private language attack as circular, I conclude by marking their further contrast on the actual foundations of meaning and truth.
I explore the idea of language reaching its limits by distinguishing two kinds of limits language may have. The first are “Boundaries,” which lie on the edges of language and distinguish what makes sense from what does not. These, I claim, are suitable for making theoretical generalizations. The second are “Contours,” which lie within language and allow for contrasting and comparing meanings and shades of meaning that we capture in language. These are more suitable for characterizations of particulars, and for literary use. I claim that failure to draw this distinction is responsible for confusions in Sabina Lovibond’s and Richard Rorty’s views of moral thought and language.
In their response to our article (Keestra and Cowley, 2009), Hacker and Bennett charge us with failing to understand the project of their book Philosophical Foundations of Neuroscience (PFN; Bennett and Hacker, 2003) and do so by discussing foundationalism, linguistic conservatism, and the passivity of perception. In this rebuttal we explore the disagreements that explain the alleged errors. First, we reiterate our substantial disagreement with Bennett and Hacker (B&H) regarding their assumption that, even for much-debated concepts like ‘consciousness’, we can assume conceptual consensus within a community of competent speakers. Instead, we emphasize variability and divergence between individuals and groups in such contexts. Second, we plead for modesty in conceptual analysis, including the use of conceptual ambiguities as heuristics for the investigation of explanatory mechanisms. Third, we elucidate our proposal by discussing the interdependence of perception and action, which in some cases appears to be problematic for PFN. Fourth, we discuss why our view of conceptual innovation differs from B&H’s, as we plead for linking explanatory ingredients with conceptual analysis. We end by repeating our particular agreement with their mereological principle, even though we present different reasons: psychological concepts should not be applied to mere components or operations of explanatory mechanisms, for which another vocabulary should be developed.
One can attack a philosophical claim by identifying a misuse of the language used to state it. I distinguish between two varieties of this strategy: one belonging to Norman Malcolm and the other to Ludwig Wittgenstein. The former is flawed and easily dismissible as misled linguistic conservatism; it muddies the name of ordinary language philosophy. I argue that the latter avoids this flaw. To make perspicuous the kind of criticism of philosophical claims that the second variety makes available, I draw a comparison between Wittgenstein’s recommendation that philosophers study ordinary language and Alfred Schütz’s recommendation that social scientists study the methods of the agents they study. Both do so in an attempt to sensitise philosophers and social scientists respectively to particular artefacts of method which can easily be mistaken for features of that which is studied.
I review recent work on Phenomenal Conservatism, the position introduced by Michael Huemer according to which, if it seems to a subject S that P, then in the absence of defeaters S thereby has some degree of justification for believing P.
Conservatism about perceptual justification tells us that we cannot have perceptual justification to believe p unless we also have justification to believe that perceptual experiences are reliable. There are many ways to maintain this thesis, ways that have not been sufficiently appreciated. Most of these ways lead to at least one of two problems. The first is an over-intellectualization problem, whereas the second concerns the satisfaction of the epistemic basing requirement on justified belief. I argue that there is at least one Conservative view that survives both difficulties, a view which has the further ability to undercut a crucial consideration that has supported Dogmatist views about perceptual justification. The final section explores a tension between Conservatism and the prospects of having a completely general account of propositional justification. Ironically, the problem is that Conservatives seem committed to making the acquisition of propositional justification too easy. My partial defense of Conservatism concludes by suggesting possible solutions to this problem.
Linguists, particularly in the generative tradition, commonly rely upon intuitions about sentences as a key source of evidence for their theories. While widespread, this methodology has also been controversial. In this paper, I develop a positive account of linguistic intuition and defend its role in linguistic inquiry. Intuitions qualify as evidence as a form of linguistic behavior which, since it is partially caused by linguistic competence (the object of investigation), can be used to study this competence. I defend this view by meeting two challenges: first, that intuitions are collected through methodologically unsound practices, and second, that intuition cannot distinguish between the contributions of competence and performance systems.
Previous studies have shown that object properties are processed faster when they follow properties from the same perceptual modality than properties from different modalities. These findings suggest that language activates sensorimotor processes, which, according to those studies, can only be explained by a modal account of cognition. The current paper shows how a statistical linguistic approach based on word co-occurrences can also reliably predict the category of perceptual modality a word belongs to (auditory, olfactory–gustatory, visual–haptic), even though the statistical linguistic approach is less precise than the modal approach (auditory, gustatory, haptic, olfactory, visual). Moreover, the statistical linguistic approach is compared with the modal embodied approach in an experiment in which participants verify properties that share or shift modalities. Response times suggest that fast responses can best be explained by the linguistic account, whereas slower responses can best be explained by the embodied account. These results provide further evidence for the theory that conceptual processing is both linguistic and embodied, whereby less precise linguistic processes precede precise simulation processes.
This paper criticizes phenomenal conservatism––the influential view according to which a subject S’s seeming that P provides S with defeasible justification for believing P. I argue that phenomenal conservatism, if true at all, has a significant limitation: seeming-based justification is elusive because S can easily lose it by just reflecting on her seemings and speculating about their causes––I call this the problem of reflective awareness. Because of this limitation, phenomenal conservatism doesn’t have all the epistemic merits attributed to it by its advocates. If true, phenomenal conservatism would constitute a unified theory of epistemic justification capable of giving everyday epistemic practices a rationale, but it wouldn’t afford us the means of an effective response to the sceptic. Furthermore, phenomenal conservatism couldn’t form the general basis for foundationalism.
In “Compassionate Phenomenal Conservatism” (2007), “Phenomenal Conservatism and the Internalist Intuition” (2006), and Skepticism and the Veil of Perception (2001), Michael Huemer endorses the principle of phenomenal conservatism, according to which appearances or seemings constitute a fundamental source of (defeasible) justification for belief. He claims that those who deny phenomenal conservatism, including classical foundationalists, are in a self-defeating position, for their views cannot be both true and justified; that classical foundationalists have difficulty accommodating false introspective beliefs; and that phenomenal conservatism is most faithful to the central internalist intuition. I argue that Huemer’s self-defeat argument fails, that classical foundationalism is able to accommodate fallible introspective beliefs, and that classical foundationalism captures a relatively clear internalist intuition. I also show that the motivation for phenomenal conservatism is less than clear.
In this paper we argue that Michael Huemer’s phenomenal conservatism—the internalist view according to which our beliefs are prima facie justified if based on how things seem or appear to us to be—doesn’t fall afoul of Michael Bergmann’s dilemma for epistemological internalism. We start by showing that the thought experiment that Bergmann adduces to conclude that phenomenal conservatism is vulnerable to his dilemma misses its target. After that, we distinguish between two ways in which a mental state can contribute to the justification of a belief: the direct way and the indirect way. We identify a straightforward reason for claiming that the justification contributed indirectly is subject to Bergmann’s dilemma. Then we show that the same reason doesn’t extend to the claim that the justification contributed directly is subject to Bergmann’s dilemma. Since phenomenal conservatism is the view that seemings or appearances contribute justification directly, we infer that Bergmann’s contention that his dilemma applies to it is unmotivated. In the final part, we suggest that our line of response to Bergmann can be used to shield other types of internalist justification from Bergmann’s objection. We also propose that seeming-grounded justification can be combined with justification of one of these types to form the basis of a promising version of internalist foundationalism.
Phenomenal conservatism holds, roughly, that if it seems to S that P, then S has evidence for P. I argue for two main conclusions. The first is that phenomenal conservatism is better suited than is proper functionalism to explain how a particular type of religious belief formation can lead to non-inferentially justified religious beliefs. The second is that phenomenal conservatism makes evidence so easy to obtain that the truth of evidentialism would not be a significant obstacle to justified religious belief. A natural objection to phenomenal conservatism is that it makes evidence too easy to obtain, but I argue this objection is mistaken.
Huemer defends phenomenal conservatism (PC) and also the further claim that belief in any rival theory is self-defeating (SD). Here I construct a dilemma for his position: either PC and SD are incompatible, or belief in PC is itself self-defeating. I take these considerations to suggest a better self-defeat argument for (belief in) PC and a strong form of internalism.
An English double-embedded relative clause from which the middle verb is omitted can often be processed more easily than its grammatical counterpart, a phenomenon known as the grammaticality illusion. This effect has been found to be reversed in German, suggesting that the illusion is language specific rather than a consequence of universal working memory constraints. We present results from three self-paced reading experiments which show that Dutch native speakers also do not show the grammaticality illusion in Dutch, whereas both German and Dutch native speakers do show the illusion when reading English sentences. These findings provide evidence against working memory constraints as an explanation for the observed effect in English. We propose an alternative account based on the statistical patterns of the languages involved. In support of this alternative, a single recurrent neural network model that is trained on both Dutch and English sentences is shown to predict the cross-linguistic difference in the grammaticality effect.
In this paper, I argue that Phenomenal Conservatism (PC) is not superior to alternative theories of basic propositional justification insofar as those theories that reject PC are self-defeating. I show that self-defeat arguments similar to Michael Huemer’s Self-Defeat Argument for PC can be constructed for other theories of basic propositional justification as well. If this is correct, then there is nothing special about PC in that respect. In other words, if self-defeat arguments can be advanced in support of alternatives to PC, then Huemer’s Self-Defeat argument doesn’t uniquely motivate PC.
Is linguistic understanding a form of knowledge? I clarify the question and then consider two natural forms a positive answer might take. I argue that, although some recent arguments fail to decide the issue, neither positive answer should be accepted. The aim is not yet to foreclose on the view that linguistic understanding is a form of knowledge, but to develop desiderata on a satisfactory successor to the two natural views rejected here.
Recently there has been a good deal of interest in the relationship between common sense epistemology and Skeptical Theism. Much of the debate has focused on Phenomenal Conservatism and any tension that there might be between it and Skeptical Theism. In this paper I further defend the claim that there is no tension between Phenomenal Conservatism and Skeptical Theism. I show the compatibility of these two views by coupling them with an account of defeat – one that is friendly to both Phenomenal Conservatism and Skeptical Theism. In addition, I argue that this account of defeat can give the Skeptical Theist what she wants – namely a response to the evidential argument from evil that can leave one of its premises unmotivated. In giving this account I also respond to several objections from Trent Dougherty (2011) and Chris Tucker (this volume) as well as to an additional worry coming from the epistemology of disagreement.
Inspired by the success of generative linguistics and transformational grammar, proponents of the linguistic analogy (LA) in moral psychology hypothesize that careful attention to folk-moral judgments is likely to reveal a small set of implicit rules and structures responsible for the ubiquitous and apparently unbounded capacity for making moral judgments. As a theoretical hypothesis, LA thus requires a rich description of the computational structures that underlie mature moral judgments, an account of the acquisition and development of these structures, and an analysis of those components of the moral system that are uniquely human and uniquely moral. In this paper we present the theoretical motivations for adopting LA in the study of moral cognition: (a) the distinction between competence and performance, (b) poverty of stimulus considerations, and (c) adopting the computational level as the proper level of analysis for the empirical study of moral judgment. With these motivations in hand, we review recent empirical findings that have been inspired by LA and which provide evidence for at least two predictions of LA: (a) the computational processes responsible for folk-moral judgment operate over structured representations of actions and events, as well as coding for features of agency and outcomes; and (b) folk-moral judgments are the output of a dedicated moral faculty and are largely immune to the effects of context. In addition, we highlight the complexity of the interfaces between the moral faculty and other cognitive systems external to it (e.g., number systems). We conclude by reviewing the potential utility of the theoretical and empirical tools of LA for future research in moral psychology.
Recently, Michael Huemer has defended the Principle of Phenomenal Conservatism (PC): If it seems to S that p, then, in the absence of defeaters, S thereby has at least some degree of justification for believing that p. This principle has potentially far-reaching implications. Huemer uses it to argue against skepticism and to defend a version of ethical intuitionism. I employ a reductio to show that PC is false. If PC is true, beliefs can yield justification for believing their contents in cases where, intuitively, they should not be able to do so. I argue that there are cases where a belief that p can behave like an appearance that p and thereby make it seem to one that p.
In an intriguing essay, G. A. Cohen has defended a conservative bias in favour of existing value. In this paper, we consider whether Cohen’s conservatism raises a new challenge to the use of human enhancement technologies. We develop some of Cohen’s suggestive remarks into a new line of argument against human enhancement that, we believe, is in several ways superior to existing objections. However, we shall argue that on closer inspection, Cohen’s conservatism fails to offer grounds for a strong sweeping objection to enhancement, and may even offer positive support for forms of enhancement that preserve valuable features of human beings. Nevertheless, we concede that Cohen’s arguments may suggest some plausible and important constraints on the modality of legitimate and desirable enhancements.
The debate over the merits of originalism has advanced considerably in recent years, both in terms of its intellectual sophistication and its practical significance. In the process, some prominent originalists—Lawrence Solum and Jeffrey Goldsworthy being the two discussed here—have been at pains to separate out the linguistic and normative components of the theory. For these authors, while it is true that judges and other legal decision-makers ought to be originalists, it is also true that the communicated content of the constitution is its original meaning. That is to say: the meaning is what it is, not what it should be. Accordingly, there is no sense in which the communicated content of the constitution is determined by reference to moral desiderata; linguistic desiderata do all the work. In this article, I beg to differ. In advancing their arguments for linguistic originalism, both authors rely upon the notion of successful communications conditions. In doing so they implicitly open up the door for moral desiderata to play a role in determining the original communicated content. This undercuts their claim and changes considerably the dialectical role of linguistic originalism in the debate over constitutional interpretation.
“Against Hanna on Phenomenal Conservatism,” by Kevin McCain (Department of Philosophy, University of Rochester). Acta Analytica, pp. 1–10, DOI 10.1007/s12136-012-0148-2.
In this paper I propose a simple linguistic approach to the Knobe effect, or the moral asymmetry of intention attribution in general, which is simply to ask for felicity judgments on the relevant sentences without any vignette at all. Through this approach I was in fact able to reproduce the (quasi-)Knobe effects in different languages (English and Japanese), with large effect sizes. I defend the significance of this simple approach by arguing that our approach and its results not only tell us interesting facts about the concept of intentional action, but also show the existence of the linguistic default, which requires independent investigation. I then argue that, despite Knobe’s own recent view of experimental philosophy, there is a legitimate role for the empirical study of concepts in the investigation of cognitive processes in experimental philosophy, which suggests a broadly supplementary picture of experimental philosophy today.
John DePoe has criticized the self-defeat argument for Phenomenal Conservatism. He argues that acquaintance, rather than appearance, may form the basis for non-inferentially justified beliefs, and that Phenomenal Conservatism conflicts with a central motivation for internalism. I explain how Phenomenal Conservatism and the self-defeat argument may survive these challenges.
The main task of this paper is to detail and investigate Carnap’s conception of a “linguistic framework” (LF). On this basis, we will see whether Carnap’s dichotomies, such as the analytic–synthetic distinction, are to be construed as absolute/fundamental dichotomies or merely as relative dichotomies. I argue for a novel interpretation of Carnap’s conception of an LF and, on that basis, show that, according to Carnap, all the dichotomies to be discussed are relative dichotomies; they depend on conventional decisions concerning the logical syntax of the LF. Thus, all of the dichotomies directly hinge on the conception of the LF. The LF’s logical structure, in turn, is an immediate consequence of adopting the linguistic doctrine of logical truths. As we will see, no appeal to any of these distinctions is necessary in establishing an LF and all of its components. I will also draw attention to the differences between what Carnap labels a “way of speaking”, a “language”, and an “artificial language”. Consequently, I briefly conclude that none of Quine’s major objections address the main points of Carnap’s theory.
In this paper, I respond to Michael Huemer’s reply to my objection against Phenomenal Conservatism (PC). I have argued that Huemer’s Self-defeat Argument for PC does not favor PC over competing theories of basic propositional justification, since analogous self-defeat arguments can be constructed for competing theories. Huemer responds that such analogous self-defeat arguments are unsound. In this paper, I argue that Huemer’s reply does not save his Self-defeat Argument for PC from my original objection.
In this article, we explore whether cross-linguistic differences in grammatical aspect encoding may give rise to differences in memory and cognition. We compared native speakers of two languages that encode aspect differently (English and Swedish) in four tasks that examined verbal descriptions of stimuli, online triads matching, and memory-based triads matching with and without verbal interference. Results showed between-group differences in verbal descriptions and in memory-based triads matching. However, no differences were found in online triads matching and in memory-based triads matching with verbal interference. These findings need to be interpreted in the context of the overall pattern of performance, which indicated that both groups based their similarity judgments on common perceptual characteristics of motion events. These results show for the first time a cross-linguistic difference in memory as a function of differences in grammatical aspect encoding, but they also contribute to the emerging view that language fine-tunes rather than shapes perceptual processes that are likely to be universal and unchanging.
Alice Crary claims that “the standard view of the bearing of Wittgenstein's philosophy on ethics” is dominated by “inviolability interpretations”, which often underlie conservative readings of Wittgenstein. Crary says that such interpretations are “especially marked in connection with On Certainty”, where Wittgenstein is represented as holding that “our linguistic practices are immune to rational criticism, or inviolable”. Crary's own conception of the bearing of Wittgenstein's philosophy on ethics, which I call the “intrinsically-ethical reading”, derives from the influential New Wittgenstein school of exegesis, and is also espoused by James Edwards, Cora Diamond, and Stephen Mulhall. To my eyes, intrinsically-ethical readings present a peculiar picture of ethics, which I endeavour to expose in Part I of the paper. In Part II I present a reading of On Certainty that Crary would call an “inviolability interpretation”, defend it against New Wittgensteinian critiques, and show that this kind of reading has nothing to do with ethical or political conservatism. I go on to show how Wittgenstein's observations on the manner in which we can neither question nor affirm certain states of affairs that are fundamental to our epistemic practices can be fruitfully extended to ethics. Doing so sheds light on the phenomenon that I call “basic moral certainty”, which constitutes the foundation of our ethical practices, and the scaffolding or framework of moral perception, inquiry, and judgement. The nature and significance of basic moral certainty will be illustrated through consideration of the strangeness of philosophers' attempts at explaining the wrongness of killing.
The creative aspect of language use provides a set of phenomena that a science of language must explain. It is the “central fact to which any significant linguistic theory must address itself” and thus “a theory of language that neglects this ‘creative’ aspect is of only marginal interest” (Chomsky 1964: 7–8). Therefore, the form and explanatory depth of linguistic science is restricted in accordance with this aspect of language. In this paper, the implications of the creative aspect of language use for a scientific theory of language will be discussed, noting the possible further implications for a science of the mind. It will be argued that a corollary of the creative aspect of language use is that a science of language can study the mechanisms that make language use possible, but that such a science cannot explain how these mechanisms enter into human action in the form of language use.
This paper focuses on the linguistic evidence base provided by proponents of conceptualism (e.g., Chomsky) and rational realism (e.g., Katz) and challenges some of the arguments alleging that the evidence allowed by conceptualists is superior to that of rational realists. Three points support this challenge. First, neither conceptualists nor realists are in a position to offer direct evidence. This challenges the conceptualists’ claim that their evidence is inherently superior. Differences between the kinds of available indirect evidence will be discussed. Second, at least some of the empirical evidence provided by the conceptualist is flawed. It is not obtained independently of theoretical commitments, alternative interpretations have not been ruled out, and some of the thought experiments intended to extend the evidence base are conceptually flawed. Third, the widely held assumption that rational realism disallows empirical evidence relevant to linguistics is dubious. It will be shown that the limitation imposed by rational realism concerns strictly formal linguistics. The rationalist realist has no reason to impose any restriction on the evidence relevant to psycholinguistics. I conclude that it is a mistake to dismiss realism based on the assumption that it imposes undue restrictions on evidence that is relevant to linguistics.
In this paper the cognitive, cultural, and linguistic bases for a pattern of conventionalization of two types of iconic handshapes are described. Work on sign languages has shown that handling handshapes (H-HSs) and object handshapes (O-HSs) express an agentive/non-agentive semantic distinction in many sign languages: H-HSs are used in agentive event descriptions and O-HSs are used in non-agentive event descriptions. In this work, American Sign Language (ASL) and Italian Sign Language (LIS) productions are compared, as well as the corresponding groups of gesturers in each country using “silent gesture.” While the gesture groups, in general, did not employ an H-HS/O-HS distinction, all participants used iconic handshapes more often in agentive than in non-agentive event descriptions; moreover, none of the subjects produced the opposite of the expected pattern. These effects are argued to be grounded in cognition. In addition, some individual gesturers were observed to produce the H-HS/O-HS opposition for agentive and non-agentive event descriptions—more Italian than American adult gesturers. This effect is argued to be grounded in culture. Finally, the agentive/non-agentive handshape opposition is confirmed for signers of ASL and LIS, but previously unreported cross-linguistic differences were also found across both adult and child sign groups. It is, therefore, concluded that cognitive, cultural, and linguistic factors contribute to the conventionalization of this distinction of handshape type.
In this paper, I outline a reductio against Phenomenal Conservatism. If sound, this reductio shows that the phenomenal conservative is committed to the claim that appealing to appearances is not a trustworthy method of fixing belief.
Many economists and philosophers assume that status quo bias is necessarily irrational. I argue that, in some cases, status quo bias is fully rational. I discuss the rationality of status quo bias on both subjective and objective theories of the rationality of preferences. I argue that subjective theories cannot plausibly condemn this bias as irrational. I then discuss one kind of objective theory, which holds that a conservative bias toward existing things of value is rational. This account can fruitfully explain some compelling aspects of common sense morality, and it may justify status quo bias.
Questions about the relationship between linguistic competence and expertise are examined in this paper. Harry Collins and others distinguish between ubiquitous and esoteric expertise. Collins places considerable weight on the argument that ordinary linguistic competence and related phenomena exhibit a high degree of expertise. His position, and ones which share close affinities with it, are methodologically problematic. These difficulties matter because there is continued and systematic disagreement over appropriate methodologies for the empirical study of expertise. Against Collins, it will be argued that the term ‘expertise’ should be reserved for esoteric expertise and should exclude everyday performance (ubiquitous expertise). Wittgensteinian ideas will be employed to maintain that it is mistaken and misleading to derive substantive conclusions about the epistemology of expertise from ordinary linguistic competence, and vice versa. Significant attention will be devoted to the notion of following a rule, with particular reference to the intelligibility of tacit rule-following. A satisfactory theoretical approach to expertise should not involve making important claims about ordinary linguistic competence.
For some years now, Michael Bergmann has urged a dilemma against internalist theories of epistemic justification. For reasons I explain below, some epistemologists have thought that Michael Huemer’s principle of Phenomenal Conservatism (PC) can split the horns of Bergmann’s dilemma. Bergmann has recently argued, however, that PC must inevitably, like all other internalist views, fall prey to his dilemma. In this paper, I explain the nature of Bergmann’s dilemma and his reasons for thinking that PC cannot escape it before arguing that he is mistaken: PC can indeed split its horns.
This paper presents a simple model to estimate the number of languages that existed throughout history, and considers philosophical and linguistic implications of the findings. The estimated number is 150,000 plus or minus 50,000. Because only a few of those languages remain, and there is no reason to believe that the remainder is a statistically representative sample, we should be very cautious about universalistic claims based on existing linguistic variation.
Phenomenal conservatism is a popular theory of epistemic justification. Despite its popularity and the fact that some think that phenomenal conservatism can provide a complete account of justification, it faces several challenges. Among these challenges are the need to provide accounts of defeaters and inferential justification. Fortunately, there is hope for phenomenal conservatism. Explanationism, the view on which justification is a matter of explanatory considerations, can help phenomenal conservatism with both of these challenges. The resulting view is one that respects the internalist character of phenomenal conservatism and its motivating intuitions while providing an intuitive and elegant account of both inferential justification and the justificatory impact of defeaters.
It is common wisdom that linguistic communication is different from linguistic understanding. However, the distinction between communication and understanding is not as clear as it seems to be. It is argued that the relationship between linguistic communication and understanding depends upon the notions of understanding and communication involved. Thinking along the lines of propositional understanding and informative communication, communication can be reduced to mutual understanding. In contrast, operating along the lines of hermeneutic understanding and dialogical communication, the process of understanding is in essence a process of communication. However, dialogical communication should not be confused with propositional understanding. Conversely, hermeneutic understanding should not be confused with informative communication either. The former is dialogical in nature while the latter is monological.
According to a family of views under the label of epistemic conservatism, the fact that one already believes something can make it rational to continue to believe it. A number of philosophers have found conservatism attractive, but traditional views are vulnerable to several powerful criticisms. In this paper, I develop an alternative to standard views by identifying a widespread assumption shared by conservatives and their critics alike, namely that rational norms govern states of mind like belief, and by showing how rejecting this assumption in favor of a process-oriented approach opens the door to a new, dynamic form of conservatism which preserves its core motivations while avoiding its traditional objections.
Daniel Hutto’s Enactive account of social cognition maintains that pre- and non-linguistic interactions do not require that the participants represent the psychological states of the other. This goes against traditional ‘cognitivist’ accounts of these social phenomena. This essay examines Hutto’s Enactive account, and proposes two challenges. The account maintains that organisms respond to the behaviours of others, and in doing so respond to the ‘intentional attitude’ which the other has. The first challenge argues that there is no adequate account of how the organisms respond to the correct aspect of the behaviour in each situation. The second challenge argues that the Enactive account cannot account for the flexibility of pre- and non-linguistic responses to others. The essay concludes that these challenges provide more than sufficient reason to doubt the viability of Hutto’s account as an alternative to cognitivist approaches to social cognition.
The present paper aims to revisit the virtues and disadvantages of epistemic conservatism, which claims that it is rational to adhere to a belief until there is evidence to the contrary. Two main theses are put forward: first, while conservatism presents several epistemological flaws, from a contextualist point of view it is not only desirable but also essential to knowledge accumulation in everyday life; second, conservatism provides a solution to sceptical challenges and to the problem of easy knowledge.
In this paper I examine a contemporary debate about the general notion of linguistic rules and the place of context in determining meaning, which has arisen in the wake of a challenge that the conceptual framework of moral particularism has brought to the table. My aim is to show that particularism in the theory of meaning yields an attractive model of linguistic competence that stands as a genuine alternative to other use-oriented but still generalist accounts that allow room for context-sensitivity in deciding how the linguistic rules would apply in concrete cases. I argue that the ideas developed in relation to particularism in meta-ethics illuminate a difficulty with the modest generalist view, one that can be resolved by adopting semantic particularism instead.
Credal Conservatism says that an agent’s credal states should be conserved as far as possible when she undergoes a learning experience. Uniqueness says that for any given total evidence, there is a unique credal state that any agent with that total evidence should have. Epistemic Impartiality is the idea that there are no significant differences between intrapersonal and interpersonal rationality requirements when determining what credal states one ought to have for purposes of epistemic evaluation. I construe Epistemic Impartiality as a meta-principle governing epistemic norms, and argue that it is compatible with Credal Conservatism. Then I show that on the assumption of Epistemic Impartiality, Credal Conservatism is equivalent to Uniqueness.
The paper considers a version of the problem of linguistic creativity obtained by interpreting attributions of ordinary semantic knowledge as attributions of practical competencies with expressions. The paper explains how to cope with this version of the problem without invoking either compositional theories of meaning or the notion of ‘tacit knowledge’ (of such theories) that has led to unnecessary puzzlement. The central idea is to show that the core assumption used to raise the problem is false. To make a precise argument possible, the paper first identifies and removes some relevant semantic indeterminacy in philosophical talk of ‘semantic knowledge’ and ‘information’. This yields rules for attributing the two to human speakers and information-processors, respectively. The paper then shows, first, that ordinary speakers qualify as possessing all along an other than finite and definite stock of semantic knowledge and, second, that a very simple information-processor running a procedural semantics qualifies as possessing an analogous stock of semantic information. The second result is used to bring out that the first is neither unduly impressive nor particularly puzzling.
Crispin Wright has advanced a number of arguments to show that, in addition to evidential warrant, we have a species of non-evidential warrant, namely, “entitlement”, which forms the basis of a particular view of the architecture of perceptual justification known as “epistemic conservatism”. It is widely known, however, that Wright's conservative view is beset by a number of problems. In this article, I shall argue that the kind of warrant that emerges from Wright's account is not the standard truth-conducive justification, but what is known as the deontological conception of justification. It will be argued that the deontological justification has features that make it a better candidate for representing a conservative architecture. These results will be reinforced by showing how the deontological framework can make better sense of a recent theory of justified belief that takes its inspiration from Wright's conservative account. Thus understood, we may see the liberalism–conservatism controversy as actually an extension of the older debate over which conception of justification, truth-conducive or deontological, can best represent the epistemic status of our belief-forming practices.
Michael Dummett famously holds that the “philosophy of thought” must proceed via the philosophy of language, since that is the only way to preserve the objectivity of thoughts while avoiding commitments to “mythological,” Platonic entities. Central to Dummett’s case is his thesis that all thought contents are linguistically expressible. In this paper, I will (a) argue that making the linguistic turn is neither necessary nor sufficient to avoid the problems of psychologism, (b) discuss Wayne Martin’s argument that not all thought-contents are linguistically communicable, and (c) present another, stronger argument, derived from Husserl’s early account of fulfillment, that establishes the same conclusion.
The aim of this paper is to highlight an individualist streak in both Davidson’s conception of language and Chomsky’s. In the first part of the paper, I argue that in Davidson’s case this individualist streak is a consequence of an excessively strong conception of what the compositional nature of linguistic meaning requires, and I offer a weaker conception of that requirement that can do justice to both the publicity and the compositionality of language. In the second part of the paper, I offer a comparison between Davidson’s position on the unreality of public languages, and Chomsky’s position regarding the epiphenomenal status of “externalized” languages. In Chomsky’s case, as in Davidson’s, languages are individuated in terms of the formal theories that serve to account for their systematic structure, and this assumption rests upon a similarly strong and similarly questionable understanding of what it is to employ finite means in pursuit of an infinite task. The alternative, at which I can only hint, is a view of language as a social and historical reality, i.e., a realm of social fact that cannot be exhausted by any formal theory and cannot be reduced to properties of individual speakers.
There is widespread agreement about a combination of attributes that someone needs to possess if they are to be counted as a conservative. They need to lack definite political ideals, goals or ends, to prefer the political status quo to its alternatives, and to be risk averse. Why should these three highly distinct attributes, which are widely believed to be characteristic of adherents to a significant political position, cluster together? Here I draw on prospect theory to develop an explanation for the clustering of attributes that is characteristic of conservatives. I argue that a lack of political ideals is the underlying driver of conservatism. I will provide reason to believe that people who lack political ideals are disposed to prefer the political status quo to its alternatives; and reason to believe that people who prefer the political status quo to its alternatives are disposed to be risk averse, at least with respect to significantly many of the risks that arise in the social and political domain. I also consider and reject some other potential explanations for the clustering of attributes that is characteristic of conservatives and sketch some policy implications that follow from the explanation I develop.