Are all instances of the T-schema assertable? I argue that they are not. The reason is the presence of conventional implicature in a language. Conventional implicature is meant to be a component of the rule-based content that a sentence can have, but it makes no contribution to the sentence's truth-conditions. One might think that a conventional implicature is like a force operator. But it is not, since it can enter into the scope of logical operators. It follows that the semantic content of a sentence is not given simply by its truth-conditional content. So not all instances of the T-schema are assertable in the relevant sense. Consequently, there is a strong case to be made against truth-conditional semantics of the disquotational variety and deflationism about truth.
We define a notion of projective meaning which encompasses both classical presuppositions and phenomena which are usually regarded as non-presuppositional but which also display projection behavior—Horn’s assertorically inert entailments, conventional implicatures (both Grice’s and Potts’) and some conversational implicatures. We argue that the central feature of all projective meanings is that they are not-at-issue, defined as a relation to the question under discussion. Other properties differentiate various sub-classes of projective meanings, one of them the class of presuppositions according to Stalnaker. This principled taxonomy predicts differences in behavior unexpected on other models among the various conventional triggers and conversational implicatures, while holding promise for a general, explanatory account of projection which applies to all the types of meanings considered.
This article presents evidence that individual words and phrases can contribute multiple independent pieces of meaning simultaneously. Such multidimensionality is a unifying theme of the literature on conventional implicatures and expressives. I use phenomena from discourse, semantic composition, and morphosyntax to detect and explore various dimensions of meaning. I also argue that, while the meanings involved are semantically independent, they interact pragmatically to reduce underspecification and fuel pragmatic enrichment. In this article, the central case studies are appositives like Falk, the CEO, and the taboo intensive damn, though discourse particles and connectives like but, even, and still play supporting roles. The primary evidence, both quantitative and qualitative, is drawn from large interview and product-review corpora, which harbor a wealth of information about the importance of these items to discourse.
Grice’s distinction between what is said and what is implicated has greatly clarified our understanding of the boundary between semantics and pragmatics. Although border disputes still arise and there are certain difficulties with the distinction itself (see the end of §1), it is generally understood that what is said falls on the semantic side and what is implicated on the pragmatic side. But this applies only to what is..
This paper provides a semantic analysis of English rise-fall-rise (RFR) intonation as a focus quantifier over assertable alternative propositions. I locate RFR meaning in the conventional implicature dimension, and propose that its effect is calculated late within a dynamic model. With a minimum of machinery, this account captures disambiguation and scalar effects, as well as interactions with other focus operators like ‘only’ and clefts. Double focus data further support the analysis, and lead to a rejection of Ward and Hirschberg’s (Language 61:747–776, 1985) claim that RFR never disambiguates. Finally, I draw out connections between RFR and contrastive topic (CT) intonation (Büring, Linguist Philos 26:511–545, 2003), and show that RFR cannot simply be reduced to a sub-case of CT.
I argue that conventional implicatures embed in logical compounds, and are non-truth-conditional contributors to sentence meaning. This, I argue, has significant implications for how we understand truth, truth-conditional content, and truth-bearers.
We review Potts’ influential book on the semantics of conventional implicature (CI), offering an explication of his technical apparatus and drawing out the proposal’s implications, focusing on the class of CIs he calls supplements. While we applaud many facets of this work, we argue that careful consideration of the pragmatics of CIs will be required in order to yield an empirically and explanatorily adequate account.
In this paper, I defend, against a number of criticisms, an account of slurs according to which the same semantic content is expressed in the use of a slur (e.g. 'chink') as is expressed in the use of its neutral counterpart (e.g. 'Chinese'), while in addition the use of a slur conventionally implicates a negative, derogatory attitude. Along the way, I criticise competing accounts of the semantics and pragmatics of slurs, namely, Hom's 'combinatorial externalism' and Anderson and Lepore's 'prohibitionism'.
The history of conventional implicatures is rocky, their current status uncertain. So it seems wise to return to their source and start afresh, with an open-minded reading of the original definition (Grice 1975) and an eye open for novel factual support. Suppose the textbook examples (therefore, even, but and its synonyms) disappeared. Where would conventional implicatures be then? This book’s primary descriptive claim is that they would still enjoy widespread factual support. I match this with a theoretical proposal: if we move just a few years forward from the genesis of CIs, we find in Karttunen and Peters’ (1979) multidimensional semantics the basis for an ideal description logic.
H. P. Grice virtually discovered the phenomenon of implicature (the term denotes the implications of an utterance that are not strictly implied by its content). Gricean theory claims that conversational implicatures can be explained and predicted using general psycho-social principles. This theory has established itself as one of the orthodoxies in the philosophy of language. Wayne Davis argues controversially that Gricean theory does not work. He shows that any principle-based theory understates both the intentionality of what a speaker implicates and the conventionality of what a sentence implicates. In developing his argument the author explains that the psycho-social principles actually define the social function of implicature conventions, which contribute to the satisfaction of those principles. This challenging book will be of importance to philosophers of language and linguists, especially those working in pragmatics and sociolinguistics.
[First Paragraph] In his recent book, Implicature: Intention, Convention, and Principle in the Failure of Gricean Theory (1998), Wayne Davis argues that the Gricean approach to conversational implicature is bankrupt and offers a new approach of his own. Although I disagree with Davis both in general and in detail, I think nonetheless that the problems he raises, or close relatives of them, are serious and important problems which should give any Gricean pause. This is an extremely worthwhile book, even for those who disagree with it.
In everyday conversations we often convey information that goes above and beyond what we strictly speaking say: exaggeration and irony are obvious examples. H.P. Grice introduced the technical notion of a conversational implicature in systematizing the phenomenon of meaning one thing by saying something else. In introducing the notion, Grice drew a line between what is said, which he understood as being closely related to the conventional meaning of the words uttered, and what is conversationally implicated, which can be inferred from the fact that an utterance has been made in context. Since Grice’s seminal work, conversational implicatures have become one of the major research areas in pragmatics. This article introduces the notion of a conversational implicature, discusses some of the key issues that lie at the heart of the recent debate, and explicates tests that allow us to reliably distinguish between semantic entailments and conventional implicatures on the one hand and conversational implicatures on the other.
Moral assertions express attitudes, but it is unclear how. This paper examines proposals by David Copp, Stephen Barker, and myself that moral attitudes are expressed as implicature (Grice), and Copp's and Barker's claim that this supports expressivism about moral speech acts. I reject this claim on the ground that implicatures of attitude are more plausibly conversational than conventional. I argue that Copp's and my own relational theory of moral assertions is superior to the indexical theory offered by Barker and Jamie Dreier, and that since the relational theory supports conversational implicatures of attitude, expressive conventions would be redundant. Furthermore, moral expressions of attitude behave like conversational and not conventional implicatures, and there are reasons for doubting that conventions of the suggested kind could exist.
The idea is that, in a wide range of contexts, utterances of the sentences in (a) in each case will communicate the assumption in (b) in each case (or something closely akin to it, there being a certain amount of contextually governed variation in the speaker's propositional attitude and so the scope of the negation). These scalar inferences are taken to be one kind of (generalized) conversational implicature. As is the case with pragmatic inference quite generally, these inferences are defeasible (cancellable), which distinguishes them from entailments, and they are nondetachable, which distinguishes them from conventional implicatures. The core idea is that the choice of a weaker element from a scale of elements ordered in terms of semantic strength (that is, numbers of entailments) tends to implicate that, as far as the speaker knows, none of the stronger elements in the scale holds in this instance. The pattern is quite clear in (1) and (2), where the weak/strong alternatives are some/all and five/six respectively. In the case of (3), the stronger expression must be intelligent and good-hearted which entails intelligent; what Y's utterance implicates is that Mary does not have the two properties: intelligence and good-heartedness, so that, given the proposition expressed (Mary is intelligent) it follows, deductively, that she is not good-hearted, in Y's opinion. The example in (4) involves a scale inversion due to the negation, so that the weak/strong alternatives are not necessarily/not possibly; the negation which the scalar inference generates creates a double negation, which is eliminated giving possibly.
Paul Grice’s theory of Conversational Implicature is, by all accounts, one of the great achievements of the past fifty years -- both of analytic philosophy and of the empirical study of language. Its guiding idea is that constraints on the use of sentences, and information conveyed by utterances of them, arise not only from their conventional meanings (the information they semantically encode) but also from the communicative uses to which they are put. In his view, the overriding goal of most forms of communication is the cooperative exchange of information -- the pursuit of which generates norms for its rational and efficient achievement. Among them are Grice’s conversational maxims.
Common sense suggests that moral judgements and conventional normative judgements are importantly different in kind. Yet a compelling vindicating account of the moral/conventional distinction has proven persistently elusive. The distinction is typically explicated in terms of either formal properties (the Form View) or substantive properties (the Content View) of the principles that figure in the judgements. But the most promising versions of these views face serious difficulties. After reviewing the difficulties with the standard accounts, I propose a new way of explicating the moral/conventional distinction in terms of the role that social practices play in grounding the judgements (the Grounds View).
A platitude questioned by many Buddhist thinkers in India and Tibet is the existence of the world. We might be tempted to insert some modifier here, such as “substantial,” “self-existent,” or “intrinsically existent,” for, one might argue, these thinkers did not want to question the existence of the world tout court but only that of a substantial, self-existent, or otherwise suitably qualified world. But perhaps these modifiers are not as important as is generally thought, for the understanding of the world questioned is very much the understanding of the world everybody has. It is the understanding that there is a world out there, independent of our minds, and that when we speak and think about this world we mostly get it right. But the Madhyamaka thinkers under discussion here deny that there is a world out there and claim that our opinions about it are to the greatest part fundamentally and dangerously wrong.
One of the most prevalent and influential assumptions in metaethics is that our conception of the relation between moral language and motivation provides strong support to internalism about moral judgments. In the present paper, I argue that this supposition is unfounded. Our responses to the type of thought experiments that internalists employ do not lend confirmation to this view to the extent they are assumed to do. In particular, they are as readily explained by an externalist view according to which there is a pragmatic and standardized connection between moral utterances and motivation. The pragmatic account I propose states that a person’s utterance of a sentence according to which she ought to ϕ conveys two things: the sentence expresses, in virtue of its conventional meaning, the belief that she ought to ϕ, and her utterance carries a generalized conversational implicature to the effect that she is motivated to ϕ. This view also makes it possible to defend cognitivism against a well-known internalist argument.
We experience resistance when we are engaging with fictional works which present certain (for example, morally objectionable) claims. But in virtue of what properties do sentences trigger this ‘imaginative resistance’? I argue that while most accounts of imaginative resistance have looked for semantic properties in virtue of which sentences trigger it, this is unlikely to give us a coherent account, because imaginative resistance is a pragmatic phenomenon. It works in a way very similar to Paul Grice's widely analysed ‘conversational implicature’.
It has been generally assumed that certain categories of numerical expressions, such as ‘more than n’, ‘at least n’, and ‘fewer than n’, systematically fail to give rise to scalar implicatures in unembedded declarative contexts. Various proposals have been developed to explain this perceived absence. In this paper, we consider the relevance of scale granularity to scalar implicature, and make two novel predictions: first, that scalar implicatures are in fact available from these numerical expressions at the appropriate granularity level, and second, that these implicatures are attenuated if the numeral has been previously mentioned or is otherwise salient in the context. We present novel experimental data in support of both of these predictions, and discuss the implications of this for recent accounts of numerical quantifier usage.
Is language understanding a special case of social cognition? To help evaluate this view, we can formalize it as the rational speech-act theory: Listeners assume that speakers choose their utterances approximately optimally, and listeners interpret an utterance by using Bayesian inference to “invert” this model of the speaker. We apply this framework to model scalar implicature (“some” implies “not all,” and “N” implies “not more than N”). This model predicts an interaction between the speaker's knowledge state and the listener's interpretation. We test these predictions in two experiments and find good fit between model predictions and human judgments.
The doctrine of the two truths - a conventional truth and an ultimate truth - is central to Buddhist metaphysics and epistemology. The two truths (or two realities), the distinction between them, and the relation between them is understood variously in different Buddhist schools; it is of special importance to the Madhyamaka school. One theory is articulated with particular force by Nagarjuna (2nd ct CE) who famously claims that the two truths are identical to one another and yet distinct. One of the most influential interpretations of Nagarjuna's difficult doctrine derives from the commentary of Candrakirti (6th ct CE). In view of its special soteriological role, much attention has been devoted to explaining the nature of the ultimate truth; less, however, has been paid to understanding the nature of conventional truth, which is often described as "deceptive," "illusion," or "truth for fools." But because of the close relation between the two truths in Madhyamaka, conventional truth also demands analysis. Moonshadows, the product of years of collaboration by ten cowherds engaged in Philosophy and Buddhist Studies, provides this analysis. The book asks, "what is true about conventional truth?" and "what are the implications of an understanding of conventional truth for our lives?" Moonshadows begins with a philosophical exploration of classical Indian and Tibetan texts articulating Candrakirti's view, and uses this textual exploration as a basis for a more systematic philosophical consideration of the issues raised by his account.
It is often observed in metaethics that moral language displays a certain duality in as much as it seems to concern both objective facts in the world and subjective attitudes that move to action. In this paper, I defend The Dual Aspect Account which is intended to capture this duality: A person’s utterance of a sentence according to which φing has a moral characteristic, such as “φing is wrong,” conveys two things: The sentence expresses, in virtue of its conventional meaning, the belief that φing has a moral property, and the utterance of the sentence carries a generalized conversational implicature to the effect that the person in question has an action-guiding attitude in relation to φing. This account has significant advantages over competing views: (i) As it is purely cognitivist, it does not have the difficulties of expressivism and various ecumenical positions. (ii) Yet, in spite of this, it can explain the close, “meaning-like,” connection between moral language and attitudes. (iii) In contrast to other pragmatic accounts, it is compatible with any relevant cognitivist view. (iv) It does not rest on a contentious pragmatic notion, such as conventional implicature. (v) It does not imply that utterances of complex moral sentences, such as conditionals, convey attitudes. In addition, the generalized implicature in question is fully calculable and cancellable.
1. Sentences have implicatures. (11, 14, 19)** 2. Implicatures are inferences. (12, 14) 3. Implicatures can’t be entailments. 4. Gricean maxims apply only to implicatures. (16, 17) 5. For what is implicated to be figured out, what is said must be determined first. (12, 13) 6. All pragmatic implications are implicatures. 7. Implicatures are not part of the truth-conditional contents of utterances. (20) 8. If something is meant but unsaid, it must be implicated. (20) 9. Scalar “implicatures” are implicatures. (11) 10. Conventional “implicatures” are implicatures.
A sentence in the Resultative perfect licenses two inferences: (a) the occurrence of an event (b) the state caused by this event obtains at evaluation time. In this paper I show that this use of the perfect is subject to a large number of distributional restrictions that all serve to highlight the result inference at the expense of the event inference. Nevertheless, only the event inference determines the truth conditions of this use of the perfect, the result inference being a unique type of conventional implicature. I argue furthermore that, since the result state is singular, the event that causes it must also be singular, whereas the Experiential perfect is purely quantificational. But in out-of-the-blue contexts the past tense is also normally interpreted as singular. This leads to a certain amount of competition between the Resultative perfect and the past tense, and it is this competition, I suggest, that maintains the conventional (non-truth conditional) result state inference.
Both proposals acknowledge that definite descriptions differ from indefinites in their implications. (Two parenthetical clarifications: (i) "implication" is to be understood here and below as neutral between semantic and pragmatic conveyance; (ii) "semantic" is to be understood to mean "conventional", that is including, in addition to truth conditional impact, anything else that is linguistically encoded.) One of these implications is what is commonly termed "familiarity": an assumption that the denotation of the NP has already been introduced, as such, to the addressee of the utterance. The other is uniqueness, or more properly exhaustive application, within the salient discourse context, of the descriptive content of the NP to the intended denotation. However both analyses attempt to derive one or both of these implications pragmatically. Ludlow & Segal propose that familiarity is a conventional implicature and uniqueness a conversational implicature. Szabó concurs with Ludlow & Segal that familiarity is more essential to definite descriptions, but attempts to derive both implications pragmatically.
Paul Grice warned that ‘the nature of conventional implicature needs to be examined before any free use of it, for explanatory purposes, can be indulged in’ (1978/1989: 46). Christopher Potts heeds this warning, brilliantly and boldly. Starting with a definition drawn from Grice’s few brief remarks on the subject, he distinguishes conventional implicature from other phenomena with which it might be confused, identifies a variety of common but little-studied kinds of expressions that give rise to it, and develops a formal, multidimensional semantic framework for systematically capturing its distinctive character. The book is a virtuosic blend of astute descriptive observations and technically sophisticated formulations. Fortunately for the technically unsophisticated reader, the descriptive observations can be appreciated on their own.
Some presuppositions seem to be weaker than others in the sense that they can be more easily neutralized in some contexts. For example some factive verbs, most notably epistemic factives like know, be aware, and discover, are known to shed their factivity fairly easily in contexts such as are found in (1). (1) a. …if anyone discovers that the method is also wombat-proof, I’d really like to know! b. Mrs. London is not aware that there have ever been signs erected to stop use of the route… c. Perhaps God knows that we will never reach the stars…. (The examples in (1) are all naturally occurring ones, discovered by David Beaver with the aid of Google; cf. Beaver 2002, exx. 32, 43, and 51, respectively.) On the other hand some other factives, e.g. regret, matter, and be surprised, do not exhibit the same type of behavior: (2) a. If any of the students regrets behaving badly, they’ll let us know. b. It doesn’t matter that the chimpanzees escaped. c. Was Bill surprised that spinach was included? Unlike the examples in (1), those in (2) could not be used appropriately in contexts where the speaker was not assuming that the complement clause was true. Our main concern will be trying to find the cause of this difference. However, before we get to that, we will look more closely at the concept “presupposition” itself, as well as its close neighbor in the linguistic literature, “conventional implicature” (section 2), and also at various ways of getting rid of presuppositions (section 3). In section 4 we will investigate two possible explanations for differences in presupposition triggering – the “lexical alternative” approach of Abusch (2002, 2005), and a suggestion of Ladusaw’s involving detachability of presuppositions. The final section contains concluding remarks.
The main aim of this paper is that of providing a unified analysis for some interesting uses of quotation marks, including so-called scare quotes. The phenomena exemplified by the cases I discuss have remained relatively unexplored, notwithstanding a growing interest in the behavior of quotation marks. They are, however, of no lesser interest than other, more widely studied effects achieved with the help of quotation marks. In particular, as I argue in what follows, scare quotes and other similar instances bear interesting relations with some important themes in the study of natural languages, such as questions regarding alleged devices of conventional implicature, cases of so-called metalinguistic negation, and, more generally, problems pertaining to the distinction between semantic and pragmatic fields of inquiry. In Section 1, I begin with a description of some examples involving the uses of quotation marks I intend to discuss, and I hint at some desiderata for their analysis. In Section 2, I temporarily abandon quotation marks, and, inspired by the recent work of Stephen Neale and Kent Bach on alleged devices of conventional implicature, I present what I call the theory of message and attachment. In Sections 3 and 4, I return to my initial examples, I employ the theory of message and attachment in their analysis, and I discuss certain features regarding the behavior of negation in some related cases.
In the 1960s, both Montague (e.g. 1970, 222) and Grice (1975, 24) famously declared that natural languages were not so different from the formal languages of logic as people had thought. Montague sought to comprehend the grammars of both within a single theory, and Grice sought to explain away apparent divergences as due to the fact that the former, but not the latter, were used for conversation. But, if we confine our concept of logic to first order predicate logic (or FOPL) with identity (that is, omitting everything which is not required for the pursuit of mathematical truth), then there are of course many other aspects, in addition to its use in conversation, which distinguish natural language from logic. Conventional implicature, information structure (including presupposition), tense and time reference, and the expression of causation and inference are several of these, which combine as well with syntactic complexities which are unnecessary in first order predicate logic. In this paper I will argue that such distinguishing aspects should be more fully exploited to explain the differences between the material conditional of logic and the indicative conditional of one natural language (English).
Mark Richard in his book offers a new and challenging expressivist theory of the use and semantics of slurs (pejoratives). The paper argues that in contrast, the central and standard uses of slurs are cognitive. It does so from the role of stereotypes in slurring, from figurative slurs and from the need for cognitive effort (or simple knowledge of relevant presumed properties of the target). Since cognition has to do with truth and falsity, and since the cognitive task is a good indicator of semantic structure, it seems that the ascription of negative properties etc. indicates that they belong to the meaning of the slur, and that this meaning therefore confers truth-aptness. The (nasty) richness of meaning might vary with pejoratives: all of them involve “contemptible because G” at the very least. The most typical ones carry more information. Some of it is given in the form of conceptual links roughly delineating the core stereotype associated with the pejorative, some in the form of figurative transfer of properties from some vehicle to the target member of G. So, slurs are not purely performative and expressive, but semantic in the traditional, truth-directed sense. The truth-gap that might characterize the resulting sentences does not point to pejoratives not having the ambition to say true and nasty things, but only to their failure in the attempt. The ambition defines the truth-directed meanings of the assumptions, the failure just records that these assumptions are false about their targets. The paper leaves it open how central the truth-directed meanings are. The argument suggests that they are pretty central, either part of the core meaning, or of conventional implicature.
Several attempts have been made by direct reference theorists to accommodate the intuitive datum of referential opacity: the failure of co-referential proper names to be mutually substitutable salva veritate in subordinate clauses, introduced by 'that', in sentences attributing propositional attitudes. The theory defended by Nathan Salmon in his 1986 book Frege's Puzzle is probably the most elaborate version of what I refer to below as 'the Implicature Theory'. Salmon holds that referential opacity is an illusion arising from our inability to distinguish the semantic content of belief attributions from their 'pragmatic impartations', as Salmon calls them. Regrettably, his work leaves entirely mysterious the routine mechanism by which such pragmatic impartations are produced. Salmon limits himself to vaguely suggesting that Gricean conversational implicatures are involved. My central thesis in this article is that Salmon is mistaken, since the pragmatic impartations his theory requires do not satisfy the cancellability criterion, which should be met whenever we are dealing with a genuine conversational implicature. The argument presented is, as far as we know, entirely original. DOI: 10.5007/1808-1711.2010v14n3p405.
To the extent that language is conventional, non-verbal individuals, including human infants, must participate in conventions in order to learn to use even simple utterances of words. This raises the question of which varieties of learning could make this possible. In this paper I defend Tomasello’s (The cultural origins of human cognition. Harvard UP, Cambridge, 1999, Origins of human communication. MIT, Cambridge, 2008) claim that knowledge of linguistic conventions could be learned through imitation. This is possible because Lewisian accounts of convention have overstated what one must know to participate in conventions; and because the required knowledge could be learned imitatively. The imitation claim that I defend is consistent with what we know about both the proliferation of conventional behaviours in human children, who are skilful imitators, and the comparative absence of such behaviours in non-human great apes, who are poor at imitative learning.
Grice made a distinction between what is said by a speaker of a verbal utterance and what is implicated. What is implicated might be either conventional (that is, largely generated by the standing meaning of certain linguistic expressions, such as ‘but’ and ‘moreover’) or conversational (that is, dependent on the assumption that the speaker is following certain rational principles of conversational exchange). What appears to have bound these rather disparate aspects of utterance meaning together, and so motivated the common label of implicature, was that they did not contribute to the truth-conditional content of the utterance, that is, the proposition it expressed, or what the speaker of the utterance said.
Some linguistic phenomena can occur in uses of language in thought, whereas others only occur in uses of language in communication. I argue that this distinction can be used as a test for whether a linguistic phenomenon can be explained via Grice's theory of conversational implicature (or any theory similarly based on principles governing conversation). I argue further, on the basis of this test, that conversational implicature cannot be used to explain quantifier domain restriction or apparent substitution failures involving coreferential names, but that it must be used to explain the phenomenon of referential uses of definite descriptions. I conclude with a brief discussion of the relevance of this point to the semantics/pragmatics distinction.
I've known about conversational implicature a lot longer than I've known Larry. In 1967 I read Grice's "Logic and Conversation" in mimeograph, shortly after his William James lectures, and I read its precursor "(Implication)," section III of "The Causal Theory of Perception", well before that. And I've thought, read, and written about implicature off and on ever since. Nevertheless, I know a lot less about it than Larry does, and that's not even taking into account everything he has uncovered about what was said on the subject long before Grice, even centuries before. So, now that I've betrayed my ignorance, I'll display my insolence. I'm going to identify the most pervasive and pernicious misconceptions about implicature that I've noticed over the years.
Ju Mipham Rinpoche (1846–1912), an important figure in the _Ris med_, or non-sectarian, movement influential in Tibet in the late 19th and early 20th centuries, was an unusual scholar in that he was a prominent _Nying ma_ scholar and _rDzogs chen_ practitioner with a solid dGe lugs education. He took dGe lugs scholars like Tsong khapa and his followers seriously, appreciated their arguments and positions, but also sometimes took issue with them directly. In his commentary to Candrakīrti's _Madhyamakāvatāra_, Mipham argues that Tsong khapa is wrong to take Candrakīrti's rejection of the reflexive character of consciousness to be a rejection of the _conventional_ existence of reflexive awareness. Instead, he argues, Candrakīrti only intends to reject the reflexivity of awareness _ultimately_, and, indeed, Mipham argues, it is simply _obvious_ that conventionally, consciousness is reflexive.
Conversational implicatures are easy to grasp for the most part. But it is another matter to give a rational reconstruction of how they are grasped. We argue that Grice's attempt to do this fails. We distinguish two sorts of cases: (1) those in which we grasp the implicature by asking ourselves what the speaker would have to believe given that what he said is such as is required by the talk exchange; (2) those in which we grasp the implicature by asking ourselves why it is that what the speaker said is so obviously not such as is required by the talk exchange. We argue that Grice's account does not fit those cases falling under (2).
The Gricean theory of conversational implicature has always been plagued by data suggesting that what would seem to be conversational inferences may occur within the scope of operators like believe, for example, which for bona fide implicatures should be an impossibility. Concentrating my attention on scalar implicatures, I argue that, for the most part, such observations can be accounted for within a Gricean framework, and without resorting to local pragmatic inferences of any kind. However, there remains a small class of marked cases that cannot be treated as conversational implicatures, and they do require a local mode of pragmatic interpretation.
This paper argues that the literal meaning of words in a natural language is less conventional than usually assumed. Conventionality is defined in terms that are relative to reasons; norms that are determined by reasons are not conventions. The paper argues that in most cases, the literal meaning of words—as it applies to their definite extension—is not conventional. Conventional variations of meaning are typically present in borderline cases of what I call the extension-range of literal meaning. Finally, some putative and one or two genuine exceptions are discussed.
1. Implicature: some basic oppositions IMPLICATURE is a component of speaker meaning that constitutes an aspect of what is meant in a speaker's utterance without being part of what is said. What a speaker intends to communicate is characteristically far richer than what she directly expresses; linguistic meaning radically underdetermines the message conveyed and understood. Speaker S tacitly exploits pragmatic principles to bridge this gap and counts on hearer H to invoke the same principles for the purposes of utterance interpretation. The contrast between the said and the meant, and derivatively between the said and the implicated (the meant-but-unsaid), dates back to the fourth-century rhetoricians Servius and Donatus, who characterized litotes—the figure of pragmatic understatement—as a figure in which we say less but mean more ("minus dicimus et plus significamus"; see Hoffmann 1987 and Horn 1991a for discussion). In the classical Gricean model, the bridge from what is said (the literal content of the uttered sentence, computed directly from its grammatical structure with the reference of indexicals resolved) to what is communicated is constructed through implicature. As an aspect of speaker meaning, implicatures are by definition distinct from the non-logical inferences that the hearer draws; it is a category mistake to attribute implicatures either to hearers or to sentences (e.g. P and Q) and subsentential expressions (e.g. some). But we can systematically (at least for generalized implicatures; see below) correlate the speaker's intention to implicate q (in uttering p in context C), the expression p that carries the implicature in C, and the inference of q induced by the speaker's utterance of p in C.
This fresh look at the philosophy of language focuses on the interface between a theory of literal meaning and pragmatics--a philosophical examination of the relationship between meaning and language use and its contexts. Here, Atlas develops the contrast between verbal ambiguity and verbal generality, works out a detailed theory of conversational inference using the work of Paul Grice on implicature as a starting point, and gives an account of their interface as an example of the relationship between Chomsky's Internalist Semantics and Language Performance. Atlas then discusses consequences of his theory of the Interface for the distinction between metaphorical and literal language, for Grice's account of meaning, for the Analytic/Synthetic distinction, for Meaning Holism, and for Formal Semantics of Natural Language. This book makes an important contribution to the philosophy of language and will appeal to philosophers, linguists, and cognitive scientists.
As Grice defined it, a speaker conversationally implicates that p only if the speaker expects the hearer to recognize that the speaker thinks that p. This paper argues that in the sorts of cases that Grice took as paradigmatic examples of conversational implicature there is in fact no need for the hearer to consider what the speaker might thus have in mind. Instead, the hearer might simply make an inference from what the speaker literally says and the situation in which the utterance takes place. In addition, a number of sources of the illusion of conversational implicatures in Grice's sense are identified and diagnosed.
Conspiracy theories should be neither believed nor investigated - that is the conventional wisdom. I argue that it is sometimes permissible both to investigate and to believe. Hence this is a dispute in the ethics of belief. I defend epistemic "oughts" that apply in the first instance to belief-forming strategies that are partly under our control. But the belief-forming strategy of not believing conspiracy theories would be a political disaster and the epistemic equivalent of self-mutilation. I discuss several variations of this strategy, interpreting "conspiracy theory" in different ways, but conclude that on all these readings, the conventional wisdom is deeply unwise.
Literal meaning is often identified with conventional meaning. In 'A Nice Derangement of Epitaphs', Donald Davidson argues (1) that literal meaning is distinct from conventional meaning, and (2) that literal meaning is identical to what he calls first meaning. In this paper it is argued that Davidson has established (1) but not (2): he has succeeded in showing that there is a distinction between literal meaning and conventional meaning but has failed to see that literal meaning and first meaning are also distinct. This failure is somewhat surprising, since it is through a consideration of Davidson's notion of radical interpretation that the distinction between literal meaning and first meaning becomes apparent.
The moral/conventional task has been widely used to study the emergence of moral understanding in children and to explore the deficits in moral understanding in clinical populations. Previous studies have indicated that moral transgressions, particularly those in which a victim is harmed, evoke a signature pattern of responses in the moral/conventional task: they are judged to be serious, generalizable and not authority dependent. Moreover, this signature pattern is held to be pan-cultural and to emerge early in development. However, almost all the evidence for these claims comes from studies using harmful transgressions of the sort that primary school children might commit in the schoolyard. In a study conducted on the Internet, we used a much wider range of harm transgressions, and found that they do not evoke the signature pattern of responses found in studies using only schoolyard transgressions. Paralleling other recent work, our study provides preliminary grounds for skepticism regarding many conclusions drawn from earlier research using the moral/conventional task.
Children, even very young children, distinguish moral from conventional transgressions, inasmuch as they hold that the former, but not the latter, would still be wrong if there were no rule prohibiting them. Many people have taken this finding as evidence that morality is objective, and therefore universal. I argue that reflection on the phenomenon of imaginative resistance will lead us to question these claims. If a concept applies in virtue of the obtaining of a set of more basic facts, then it is authority independent, and we therefore resist the attempts of authorities to claim that it does not apply. Thus, the moral/conventional distinction is a product of imaginative resistance to claims that a concept does not apply when its supervenience base is in place (or vice versa). All we can rightfully conclude from the fact that children are disposed to make the moral/conventional distinction is that our moral concepts belong to the class of authority-independent concepts. Though the set of basic facts in virtue of which an authority-independent concept obtains must be objective, the concept itself might be conventional, inasmuch as we could easily draw its boundaries wider or narrower, or fail to have a concept that corresponds to these properties at all.
In this paper I discuss some of the criteria that are widely used in the linguistic and philosophical literature to classify an aspect of meaning as either semantic or pragmatic. With regard to the case of scalar implicature (e.g. some Fs are G implying that not all Fs are G), these criteria are not ultimately conclusive, either in the results of their application, or in the interpretation of the results with regard to the semantics/pragmatics distinction (or in both). I propose a psychologically relevant criterion, that of the primary or secondary role of context. This criterion applies to sub-personal processes that derive the interpretation of a scalar term rather than to the eventual interpretation of the term, and there exist well-established experimental paradigms that can generate quantitative data. I present recent studies on scalar implicature which employ such off-line and real-time paradigms, aiming to demonstrate how research on the semantics/pragmatics distinction can benefit from experimental investigation.
In this paper, I attempt to show that the moral/conventional distinction simply cannot bear the sort of weight many theorists have placed on it for determining the moral and criminal responsibility of psychopaths. After revealing the fractured nature of the distinction, I go on to suggest how one aspect of it may remain relevant—in a way that has previously been unappreciated—to discussions of the responsibility of psychopaths. In particular, after offering an alternative explanation of the available data on psychopaths and their judgments of various sorts of norm transgressions, I put forward a hybrid theory of their responsibility, suggesting how they might be criminally responsible, while nevertheless failing to meet the conditions for an important arena of moral responsibility.
In this paper we present a modest contribution to the debate on the treatment of the pragmatically determined aspects of utterance meaning. Different authors (Bach 1994, Carston 1988 and 1998, Recanati 1989, Sperber and Wilson 1986, Levinson 2000) have defended different notions (explicature, impliciture, and implicature) to account for the phenomena labeled as Generalized Conversational Implicatures (GCI) by Grice (1989). We offer some arguments for treating some of these examples as implicitures, and for a better characterization of the notion of what is said.
Recent work in personal identity has emphasized the importance of various conventions, or 'person directed practices' in the determination of personal identity. An interesting question arises as to whether we should think that there are any entities that have, in some interesting sense, conventional identity conditions. We think that the best way to understand such work about practices and conventions is the strongest and most radical. If these considerations are correct, persons are, on our view, conventional constructs: they are in part constituted by certain conventions. A person exists only if the relevant conventions exist. A person will be a conscious being of a certain kind combined with a set of conventions. Some of those conventions are encoded in the being itself, so requiring the conventions to exist is requiring the conscious being to be organized in a particular way. In most cases the conventions in question are settled. There is no dispute about what the conventions are, and thus no dispute about which events a person can survive. These are cases where we take the conventions so much for granted, that it is easy to forget that they are there, and that they are necessary constituents of persons. Sometimes though, conventions are not settled. Sometimes there is a dispute about what the conventions should be, and thus a dispute about what events a person can survive. These are the traditional puzzle cases of personal identity. That it appears that conventions play a part in determining persons' persistence conditions only in these puzzle cases is explained by the fact that only in these cases are the conventions unsettled. Settled or not though, conventions are necessary constituents of persons.
Socially responsible investment is a rapidly emerging phenomenon within the field of personal investment. However, the factors that lead investors to choose socially responsible investment products are not well understood, especially in an Australian context. This study provides a comparative examination of conventional and socially responsible investors, with the aim of identifying such factors. A total of 55 conventional investors and 54 ethical investors participated in the study by completing mailed questionnaires about their investment and general behaviour and their attitudes and beliefs. Results indicated some important differences between socially responsible and conventional investors in their beliefs about the importance of ethical issues, their investment decision-making style, and their perceptions of moral intensity. These results support the notion that socially responsible investors differ in critical ways from conventional investors, and are discussed in terms of theoretical and practical implications.
Medical professionals, including mental health professionals, largely agree that moral judgment should be kept out of clinical settings. The rationale is simple: moral judgment has the capacity to impair clinical judgment in ways that could harm the patient. However, when the patient is suffering from a "Cluster B" personality disorder, keeping moral judgment out of the clinic might appear impossible, not only in practice but also in theory. For the diagnostic criteria associated with these particular disorders (Antisocial, Borderline, Histrionic, Narcissistic) are expressed in overtly moral language. I consider three proposals for dealing with this problem. The first is to eliminate the Cluster B disorders from the DSM on the grounds that they are moral, rather than mental, disorders. The second is to replace the morally laden language of the diagnostic criteria with morally neutral language. The third is to disambiguate the notion of moral judgment so as to respect the distinction between having morally disvalued traits and having moral responsibility for those traits. Sensitivity to this distinction enables the clinician, at least in theory, to employ morally laden diagnostic criteria without adopting the sort of morally judgmental (and potentially harmful) attitude that results from the tacit presumption of moral responsibility. I argue against the first two proposals and in favor of the third. In doing so, I appeal to Grice's distinction between conventional and conversational implicature. I close with a few brief remarks on the irony of retaining overtly moral language in an ostensibly medical manual for the diagnosis of mental disorders.
This review is a critical discussion of three main claims in Debs and Redhead’s thought-provoking book Objectivity, Invariance, and Convention. These claims are: (i) Social acts impinge upon formal aspects of scientific representation; (ii) symmetries introduce the need for conventional choice; (iii) perspectival symmetry is a necessary and sufficient condition for objectivity, while symmetry simpliciter fails to be necessary.
In a recent paper, Shaun Nichols (2002) presents a theory that offers an explanation of the cognitive processes underlying moral judgment. His Affect-Backed Norms theory claims that (i) a set of normative rules coupled with (ii) an affective mechanism elicits a certain response pattern (which we will refer to as the "moral norm response pattern") when subjects respond to transgressions of those norms. That response pattern differs from the way subjects respond to violations of norms that lack the affective backing (here referred to as the "conventional norm response pattern"). In response, Daniel Kelly and colleagues (2007) present data that, the authors claim, undermine Nichols' Affect-Backed Norms theory by showing that there are novel cases in which (i) and (ii) are in place, yet subjects respond in the way typical of the conventional response pattern. In Section I of this paper we summarize the challenge to the Affect-Backed Norms theory from the novel cases introduced by Kelly et al. We then show how the challenge is potentially flawed because no verification was provided that subjects were experiencing affect when reading the cases, nor was level of affect controlled for. In Section II, we describe the study we conducted to determine what level of affect was induced when subjects read the novel cases. In Section III, we present our findings, namely that subjects respond to the novel cases with different levels of affect, which tracks their judgments of the severity of the transgressions in the cases. In Section IV, we discuss the results and show that the Affect-Backed Norms theory can explain subjects' responses to the novel cases given this new information about affective response. In Section V, we conclude with a thought about how these findings inform the traditional moral/conventional distinction.
This study provides a comparative analysis of students' self-reported beliefs and behaviors related to six analogous pairs of conventional and digital forms of academic cheating. Results from an online survey of undergraduates at two universities (N = 1,305) suggest that students use conventional means more often than digital means to copy homework, collaborate when it is not permitted, and copy from others during an exam. However, engagement in digital plagiarism (cutting and pasting from the Internet) has surpassed conventional plagiarism. Students also reported using digital "cheat sheets" (i.e., notes stored in a digital device) to cheat on tests more often than conventional "cheat sheets." Overall, 32% of students reported no cheating of any kind, 18.2% reported using only conventional methods, 4.2% reported using only digital methods, and 45.6% reported using both conventional and digital methods to cheat. "Digital only" cheaters were less likely than "conventional only" cheaters to report assignment cheating, but the former was more likely than the latter to report engagement in plagiarism. Students who cheated both conventionally and digitally were significantly different from the other three groups in terms of their self-reported engagement in all three types of cheating behavior. Students in this "both" group also had the lowest sense of moral responsibility to refrain from cheating and the greatest tendency to neutralize that responsibility. The scientific and educational implications of these findings are discussed in this study.
Bare plurals ( dogs ) behave in ways that quantified plurals ( some dogs ) do not. For instance, while the sentence John owns dogs implies that John owns more than one dog, its negation John does not own dogs does not mean "John does not own more than one dog", but rather "John does not own a dog". A second puzzling behavior is known as the dependent plural reading; when in the scope of another plural, the 'more than one' meaning of the plural is not distributed over, but the existential force of the plural is. For example, My friends attend good schools requires that each of my friends attend one good school, not more, while at the same time being inappropriate if all my friends attend the same school. This paper shows that both these phenomena, and others, arise from the same cause. Namely, the plural noun itself does not assert 'more than one', but rather the plural denotes a predicate that is number neutral (unspecified for cardinality). The 'more than one' meaning arises as a scalar implicature, relying on the scalar relationship between the bare plural and its singular alternative, and calculated in a sub-sentential domain; namely, before existential closure of the event variable. Finally, implications of this analysis will be discussed for the analysis of the quantified noun phrases that interact with bare plurals, such as indefinite numeral DPs ( three boys ), and singular universals ( every boy ).
I argue for a subsumption of any version of Grice's first quantity maxim posited to underlie scalar implicature, by developing the idea of implicature recovery as a kind of explanatory inference, as e.g. in science. I take the applicable model to be contrastive explanation, while following van Fraassen's analysis of explanation as an answer to a why-question. A scalar implicature is embedded in such an answer, one that meets two probabilistic constraints: the probability of the answer, and 'favoring'. I argue that besides having application at large, outside of linguistic interpretation, these constraints largely account not only for implicatures based on strength order, logical and otherwise, but also for unordered cases. I thus suggest that Grice's maxim and its descendants are expressions of general explanatory constraints, as they happen to be manifested in this particular explanatory task. I conclude by briefly discussing how I accordingly view Grice's system outside of scalar implicature.
Two simple generalized conversational implicatures are investigated: (1) the quantitative scalar implicature associated with 'or', and (2) the 'not-and'-implicature, which is the dual to (1). It is argued that it is more fruitful to consider these implicatures as rules of interpretation and to model them in an algebraic fashion than to consider them as nonmonotonic rules of inference and to model them in a proof-theoretic way.
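The idea of treating the 'or' implicature as a rule of interpretation can be illustrated with a small sketch. This is my own toy model, not drawn from the paper above, and all the function names are invented for the example: literal meaning is modeled as the set of worlds (valuations) where a sentence is true, and the quantity implicature strengthens 'a or b' by discarding the worlds where both disjuncts hold.

```python
# Toy model (illustrative only): meanings as sets of valuations,
# and the scalar implicature of 'or' as a strengthening rule.
from itertools import product

def worlds(atoms):
    """All truth-value assignments over the given atoms."""
    return [dict(zip(atoms, vals))
            for vals in product([True, False], repeat=len(atoms))]

def meaning(sentence, ws):
    """Literal meaning: the set of worlds where the sentence is true."""
    return [w for w in ws if sentence(w)]

def exclusify_or(a, b, ws):
    """Interpretation rule: strengthen 'a or b' by removing
    the worlds in which both disjuncts are true."""
    literal = meaning(lambda w: a(w) or b(w), ws)
    return [w for w in literal if not (a(w) and b(w))]

ws = worlds(["p", "q"])
strengthened = exclusify_or(lambda w: w["p"], lambda w: w["q"], ws)
# The strengthened meaning keeps exactly the worlds where p and q differ,
# i.e. the exclusive reading of the disjunction.
```

The point of the set-based formulation is that the implicature operates on meanings directly, as the paper's algebraic perspective suggests, rather than as an extra inference rule in a proof system.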
I first try to identify what problem, if any, conceptual art poses for philosophical aesthetics. It is harder than one might think to formulate some claim about traditional art with which much conceptual art is inconsistent. The idea that sense experience plays a special role in the appreciation of traditional artworks falls foul of literature. Instead I focus on the idea that conceptual art exhibits a particularly loose relation between the properties with which we engage in appreciating it and the properties on which those artistic properties depend. In Part II, I then offer an account of how conceptual art communicates, and attempt to use it to illuminate some prominent features of that art. I suggest it works by frustrating certain fundamental expectations with which we approach it. In this it is analogous to certain ways of indirectly communicating in conversation – certain kinds of conversational implicature. At the close, I ask whether this account allows us to address the problem identified in Part I.
Conventional wisdom has it that there is a class of attitude ascriptions such that in making an ascription of that sort, the ascriber undertakes a commitment to specify the contents of the ascribee's head in what might be called a notionally sensitive, ascribee-centered way. In making such an ascription, the ascriber is supposed to undertake a commitment to specify the modes of presentation, concepts or notions under which the ascribee cognizes the objects (and properties) that her beliefs are about. Consequently, it is widely supposed that an ascription of the relevant sort will be true just in case it specifies either directly or indirectly both what the ascribee believes and how she believes it. The class of "notionally sensitive" ascriptions has been variously characterized. Quine (1956) calls the class I have in mind the class of notional ascriptions and distinguishes it from the class of relational ascriptions. Others call the relevant class the class of de dicto ascriptions and distinguish it from the class of de re ascriptions. More recently, it has been called the class of notionally loaded ascriptions (Crimmins 1992, 1995). So understood, the class can be contrasted with the class of notionally neutral ascriptions. Just as the class of notional/de dicto/notionally loaded ascriptions is supposed to put at semantic issue the ascribee's notions/conceptions/modes of presentation, so ascriptions in the relational/de re/notionally neutral class are supposed not to.
The effect of inducing negative, positive or neutral affect on the recall of moral and conventional transgressions and positive moral and conventional acts was examined. It was found that inducing negative affect was associated with higher recall of moral transgressions, while inducing positive affect was associated with higher recall of positive moral acts. Affect induction condition did not have a significant effect on the recall of the conventional transgressions or positive acts. The results are interpreted within the Violence Inhibition Mechanism model of moral development (Blair, 1995) and by reference to a new, hypothesised system, the Smiling Reward Response.
In this paper we study language use and language organisation by making use of Lewisean signalling games. Standard game theoretical approaches are contrasted with evolutionary ones to analyze conventional meaning and conversational interpretation strategies. It is argued that analyzing successful communication in terms of standard game theory requires agents to be very rational and fully informed. The main goal of the paper is to show that in terms of evolutionary game theory we can motivate the emergence and self-sustaining force of (i) conventional meaning and (ii) some conversational interpretation strategies in terms of weaker and, perhaps, more plausible assumptions.
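The Lewisean setting such work builds on can be illustrated with a minimal sketch. This is my own toy construction, not the paper's model, and every name in it is invented for the example: a 2-state, 2-signal, 2-act signalling game in which sender and receiver learn by simple reinforcement, so that a signalling convention can emerge from repeated play rather than from prior agreement, as the evolutionary perspective emphasizes.

```python
# Toy Lewis signalling game (illustrative only): sender and receiver
# reinforce choices that led to successful coordination, letting a
# conventional state-signal-act mapping emerge over repeated play.
import random

random.seed(0)

STATES, SIGNALS, ACTS = [0, 1], [0, 1], [0, 1]

# Urn weights: sender maps states to signals, receiver maps signals to acts.
sender = {s: {m: 1.0 for m in SIGNALS} for s in STATES}
receiver = {m: {a: 1.0 for a in ACTS} for m in SIGNALS}

def draw(urn):
    """Choose a key with probability proportional to its weight."""
    keys = list(urn)
    return random.choices(keys, weights=[urn[k] for k in keys])[0]

successes = 0
for _ in range(5000):
    state = random.choice(STATES)       # nature picks a state
    signal = draw(sender[state])        # sender signals
    act = draw(receiver[signal])        # receiver acts on the signal
    if act == state:                    # coordination succeeded:
        sender[state][signal] += 1.0    # reinforce the sender's choice
        receiver[signal][act] += 1.0    # and the receiver's choice
        successes += 1
# Over many rounds the agents typically settle on one of the two equally
# good signal-meaning conventions, without any antecedent shared meaning.
```

Neither convention (signal 0 for state 0, or the reverse) is favored in advance; which one emerges depends on early chance successes, which is exactly what makes the resulting meanings conventional in Lewis's sense.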
A venerable tradition in philosophy sees significance in the fact that, from a subjective viewpoint, some rules seem to impress themselves upon us with a distinctive kind of authority or normative force: one feels their pull and is drawn to act in accordance with such rules unconditionally, and violations strike one as egregious. Though the first person experience of it can be mystifying, I believe this phenomenology is just one aspect of the operation of a psychological system crucial to morality. Building on previous work, I'll call this property of certain rules independent normativity. After describing that property, I situate it with respect to earlier work done on the so-called moral/conventional distinction, and suggest new questions it raises about morality and emotion. Sripada and Stich (2006) posit a model of the cognitive architecture underlying an important element of human rule cognition. Following them, I'll call this the norm system, and the rules cognized by it social norms. One feature of this system is that it imputes those rules it processes with independent normativity. Understood this way, those rules that enjoy independent normativity do so not in virtue of any particular content, but because the mental representations that express them occupy a certain functional role in human minds (that I'll call SN-functional role).
I describe conventions not of correct reasoning but of giving and taking advice about reasoning. This article is an anticipation of part of the first chapter of my forthcoming *Bounded Thinking*, OUP 2012.
More than half a century ago, the Supreme Court held that the free speech protection of the First Amendment is not limited to verbal communication, but also applies to such expressive conduct as saluting a flag or burning a flag. Even though the Supreme Court has decided a number of important cases involving expressive conduct, the Court has never announced any standards for distinguishing such conduct from conduct without communicative value. The aim of this paper is to examine which conceptions of nonverbal expression underlie judicial decisions on expressive conduct, and to offer an account of expressive conduct grounded in contemporary semantic theory. The central hypothesis of this paper is that the significance of expressive conduct can be explained by principles that explain important features of linguistic meaning. I propose an analysis of expressive conduct that takes the meaningfulness of conduct as a function of the action and its consequences in context. I develop a theory of expressive conduct whose underlying conception of expression is based on a number of ideas from speech act theory. These are Grice's account of nonnatural meaning, Austin's theory of illocutionary force, and Grice's work on conversational implicature. My analysis understands the meaningfulness of conduct in terms of its relational properties and relevant features of the context upon which illocutionary force, perlocutionary properties and implicature are predicated. The natural and conventional properties of types of conduct, features of the context, and underlying social and cultural presumptions and expectations about human conduct thus play a role in the constitution of symbolic speech.
Ethicists have long observed that unethical communication may result from texts that contain no overt falsehoods but are nevertheless misleading. Less clear, however, has been the way that context and text work together to create misleading communication. Concepts from linguistics can be used to explain implicature and indirect speech acts, two patterns which, though in themselves not unethical, may allow misinterpretations and, therefore, create potentially unethical communication. Additionally, sociolinguistic theory provides insights into why writers in business and other professions are prone to use these patterns. An analysis of five cases shows that implicature and indirectness are sometimes used intentionally to deceive readers. However, their use may also reflect other motives such as the desire to mitigate negative information or to show deference to an unfamiliar or powerful reader. Although implicature and indirectness are not intended to deceive in these cases, they can lead to a loss of clarity and to subsequent ethical problems when readers misinterpret texts.
This paper is a contribution to the program of constructing formal representations of pragmatic aspects of human reasoning. We propose a formalization within the framework of Adaptive Logics of the exclusivity implicature governing the connective 'or'. Keywords: exclusivity implicature, Adaptive Logics.
As far as we are aware, this study presents the first comparative analysis of the stock picking and market timing abilities of managers of conventional and socially responsible (SR) pension funds, and of their use of superior information. For the United Kingdom, the results obtained show a slight stock picking ability on the part of SR pension fund managers (although it disappears if multifactorial models are considered), and a negative market timing ability on the part of both SR and conventional pension fund managers (these results hold for multifactorial models controlled by home bias). In relation to the management styles, both conventional and SR pension funds usually invest in small cap and growth values, although it is the SR pension funds that are the most exposed to these styles. We also observed that, while conventional pension fund managers make certain use of superior information to follow stock picking strategies, managers of SR pension funds use superior information to follow market timing strategies.
Management scholars, practitioners, and policy makers alike have sought to develop a deeper understanding of recent business crises—including corporate scandals, the collapse of financial institutions, and deep recession—in order to prevent their recurrence. Among the "culprits" that have been identified is Conventional management theory based upon a moral-point-of-view founded on assumptions of materialism and individualism. There have been calls to move beyond the dominant profit maximization paradigm and think about other, potentially more compelling, corporate objectives (Hamel, 2009). In this article, we respond to those calls, and seek to develop what we call Radical resource-based theory (RBT), which draws from and contrasts with the highly-influential Conventional RBT. Radical RBT defines the value of resources more broadly than profit maximization, rarity as an occasion for stewardship, inimitability as an opportunity for teaching, and non-substitutability as an opportunity to meet a panoply of human needs. This augmentation of RBT promises to help managers and scholars address a myriad of problems that are insoluble under Conventional assumptions. More generally, it shows the value of broadening management theory to a radical perspective by relaxing assumptions of self-interest and materialism.
The use of organic farming technologies has certain advantages in some situations and for certain crops such as maize; however, with other crops such as vegetables and fruits, yields under organic production may be substantially reduced compared with conventional production. In most cases, the use of organic technologies requires higher labor inputs than conventional technologies. Some major advantages of organic production are the conservation of soil and water resources and the effective recycling of livestock wastes when they are available.
Debates over the future of agriculture in North America establish a dialectical opposition between conventional, industrial agriculture and alternative, sustainable agriculture. This opposition has roots that extend back to the 18th century in the United States, but the debate has taken a number of surprising turns in the 20th century. Originally articulated as a philosophy of the left, industrial agriculture has utilitarian moral foundations. In the US and Canada, the articulation of an alternative to industrial agriculture has drawn upon three central themes: the belief that agriculture is, in some way, tied to democracy; the belief that complex bureaucratic organizations are inherently opposed to human interests; and the belief that the family farms characteristic of 19th century North America tend to produce people of superior moral character. It has proved difficult to weave these themes into a coherent vision of agriculture for the 21st century. Often, risk and health-based concerns are the basis for public criticism of conventional agriculture, but these do not conflict with the utilitarian orientation of the industrial model, and are easily incorporated into it. If there is to be a philosophical debate over the future of agriculture, we must find some way to rehabilitate the quasi-Aristotelean view of agriculture that emerges from the three critical themes noted above.
Are the categories used to study the social world and acting on it real or conventional? An empirical answer to that question is given by an analysis of the debates about the quality of statistics produced by the European national institutes of statistics in the 1990s. Six criteria of quality were then specified: relevance, accuracy, timeliness, accessibility, comparability and coherence. How do statisticians and users of statistics deal with the tension produced by their objects being both real (they exist before their measurement) and conventionally constructed (they are, in a way, created by these conventions)? In particular, the technical and sociological distinction between the criteria of relevance and accuracy implies a realistic interpretation, desired by users, but that is nonetheless problematic.
My aim in this note is to disambiguate various senses of 'conventional' that in the philosophy of physics have been frequently conflated. As a case study, I will refer to the well-known issue of the conventionality of simultaneity in the special theory of relativity, since it is particularly in this context that the above-mentioned confusion is present.
The stakeholder approach offers the opportunity to consider corporate responsibility in a wider sense than that afforded by the stockholder or shareholder approaches. Having said that, this article aims to show that this theory does not offer a normative corporate responsibility concept that can answer two basic questions: on the one hand, for what is the company morally responsible, and, on the other hand, why is the corporation morally responsible in terms of conventional and post-conventional perspectives? The reason why the stakeholder approach does not offer such a definition, as we shall see, is that the normative stakeholder approaches tend to confuse social validity with moral validity or legitimacy. This leads us to a conventional definition of corporate moral responsibility (CMR) that is not relevant to the pluralistic and global framework of our societies and economies. The purpose of this paper is to demonstrate this intuition.
It is often assumed that conventional ethics will contribute positively to economics and business, but here, this judgment will be examined. The conventional ethics of our time is dominated by altruistic philosophy, which has deep roots in religion. Such an idealistic 'altruistic ethics' especially emphasizes helping the least advantaged. This principle is contrasted with a more profane 'reciprocal ethics.' This term is used for the principle of mutual advantage central to a number of significant philosophers. This latter principle is compatible with the practical norms constituting the morals of the market, while the former implies major adjustment of behavior and policies. Many ethicists consider their field to be 'applied ethics,' bringing the concrete rules and practices of the economic sector closer to honored first principles of philosophy. Is it reasonable to expect an influence by the ideas of altruistic ethics to improve the morals and policies of the economy? The process of the Heavily Indebted Poor Countries (HIPC) Initiative of the United Nations illustrates a probably crucial connection with altruistic ethics. Is this project, supported by religiously inspired groups, a sound way to treat a serious global problem? The disadvantages of this project are discussed and alternatives with better potential are presented. The article suggests that altruistic ethics is a dubious foundation for constructive morality and that its dominance in contemporary philosophy constitutes a major obstacle to a more open-minded analysis and sound policies.
In recent years, analytically trained philosophers have given extensive attention to various issues involved in the "culture wars," including abortion, same-sex marriage, stem-cell research, and assisted suicide. There are, however, moral judgments that virtually no one questions. Defenses of adult-child sex, for example, are rare. There is also "conventional immorality"—the breach of conventional moral standards within roughly defined limits that at least limit the resulting damage to third parties and social institutions. These phenomena frame moral discussion even when, as often happens, conventional people are in serious moral disagreement. In this essay I try to make sense of the phenomenon; in a subsequent essay I will show how conventional morality contains within itself the seeds of its collapse, and hence requires support from human nature, either rationally discovered or understood through revelation accepted in faith.
The conventional wisdom among many sociologists is (1) that it is their prerogative to define, document, and explain the inequalities that exist in society and (2) that there are two general theoretical perspectives useful for studying inequality: functionalism and conflict theory. Some scholars have recently challenged the latter portion of this view by advocating the development of more interpretive, interactionist approaches. However, these scholars' agendas often tend to perpetuate the first half of the conventional wisdom. While interactionists (and other constructionist scholars) can choose to study inequality in any number of ways, I argue that the most distinctive contribution they can make is to focus on the meanings that inequalities have for people in everyday life, as well as how those meanings are achieved.
Developmental psychologists have long argued that the capacity to distinguish moral and conventional transgressions develops across cultures and emerges early in life. Children reliably treat moral transgressions as more wrong, more punishable, independent of structures of authority, and universally applicable. However, previous studies have not yet examined the role of these features in mature moral cognition. Using a battery of adult-appropriate cases (including vehicular and sexual assault, reckless behavior, and violations of etiquette and social contracts) we demonstrate that these features also distinguish moral from conventional transgressions in mature moral cognition. Each hypothesized moral transgression was treated as strongly and clearly immoral. However, our data suggest that although the majority of hypothesized conventional transgressions also form an obvious cluster, social conventions seem to lie along a continuum that stretches from mere matters of personal preference (e.g., getting tattoos or wearing black shoes with a brown belt) to transgressions that are treated as matters for legitimate social sanction (e.g., violating traffic laws or not paying your taxes). We use these findings to discuss issues of universality, domain-specificity, and the importance of using a well-studied set of moral scenarios to examine clinical populations and the underlying neural architecture of moral cognition.
Separate focus on crop fertilization or feeding practices inadequately describes nitrogen (N) loss from mixed dairy farms because of (1) interaction between animal and crop production and between the production system and the manager, and (2) uncertainties of herd N production and crop N utilization. Therefore a systems approach was used to study N turnover and N efficiency on 16 conventional and 14 organic private Danish farms with mixed animal (dairy) and crop production. There were significant differences in N surplus at the farm level (242 kg N/ha vs. 124 kg N/ha on conventional and organic dairy farms respectively), with a correlation between stocking rate and N surplus. N efficiency was calculated as the output of N in animal products divided by the net N import in fodder, manure and fertilizer. N turnover in herd and individual crops calculated on selected farms showed differences in organic and conventional crop N utilization. This is explained via a discussion of the rationality behind the current way of planning the optimum fertilizer application in conventional agriculture. The concept of marginal N efficiency is insufficient for correcting problems of N loss from dairy farms. Substantial reductions in N loss from conventional mixed dairy farms are probably unlikely without lower production intensity. The concept of mean farm unit N efficiency might be a way to describe the relation between production and N loss to facilitate regulation. This concept is linked to differing goals of agricultural development—i.e. intensification and separation vs. extensification and integration. It is discussed how studies in private farms—using organic farms as selected critical cases—can demonstrate possibilities for balancing production and environmental concern.
In this paper we study language use and language organisation by making use of Lewisean signalling games. Standard game theoretical approaches are contrasted with evolutionary ones to analyze conventional meaning and conversational interpretation strategies. It is argued that analyzing successful communication in terms of standard game theory requires agents to be very rational and fully informed. The main goal of the paper is to show that in terms of evolutionary game theory we can motivate the emergence and self-sustaining force of (i) conventional meaning and (ii) some conversational interpretation strategies in terms of weaker and, perhaps, more plausible assumptions.
All humans can interpret sentences of their native language quickly and without effort. Working from the perspective of generative grammar, the contributors investigate three mental mechanisms, widely assumed to underlie this ability: compositional semantics, implicature computation and presupposition computation. This volume brings together experts from semantics and pragmatics to advance the study of the interconnections between these three mechanisms. The contributions develop new insights into important empirical phenomena; for example, approximation, free choice, accommodation, and exhaustivity effects.
In this book Jeremy Dunning-Davies deals with the influence that "conventional wisdom" has on science, scientific research and development. He sets out to 'explode' the mythical conception that all scientific topics are open for free discussion and argues that no one can openly raise questions about relativity, dispute the 'Big Bang' theory, or the existence of black holes, which all seem to be accepted facts of science rather than science fiction. In today's modern climate with "Britain's radioactive refuse heap already big enough to fill the Royal Albert Hall" (Edmund Conway, Economics Editor, The Daily Telegraph, 28.11.06), it is alarming that there are potential advances in hadronic mechanics which could conceivably pave the way for new clean energies and even a safe in-house method for the disposal of nuclear waste, that have not even been considered by the present establishment. These examples are from the field of physics but there can be little doubt that outside factors have affected the progress of most, if not all, branches of science for many years. Factors other than purely scientific ones still appear to be exerting tremendous influences on progress in a wide variety of fields. Is it too idealistic or naïve to expect that science should remain pure and stay unaffected by such factors? Dr Dunning-Davies presents a beautifully written argument that if science is to progress, and be of any real use, these external factors must be held at bay.
Abstract: Concern for values in education is growing. In Canada and other countries, educationalists are becoming more aware of the need for providing for full and open discussions about moral matters. Kohlberg in the United States, Beck in Canada, and Wilson in Great Britain are three leading theorists who are involved in experimental work in moral education. In this paper, some of the ideas of these theorists are compared with reference to the development of post-conventional moral thinking in people.
I want now to argue that just as no intentional representations of retinal images intervene between physical objects and the seeing of those objects, no representations of speaker intentions in speaking need intervene between world affairs spoken of by speakers and hearers' understandings of those words. When conventional signs are true or satisfied and when this has come about in the normal way, conventional signs are locally recurrent natural signs. True, tokens of the same conventional sign may have diverse etiologies, through different people's perceptual systems and cognitive systems. They differ from more ordinary recurrent natural signs in that there will usually be numerous different kinds of causal paths to their production, depending on the ways that different speakers have managed to translate diverse prior natural signs into a uniform medium of thought and expression. But there are reasons why the same linguistic form continues to coincide with the same kind of represented affair over a certain domain --it is no accident-- and we have decided to take that as the primary criterion for a locally recurrent sign (Chapter Six). Assuming that this step in the production of a conventional sign has been accomplished through normal mechanisms --the speaker is not confused, does not lie, and so forth-- then reading a conventional sign is mainly a matter of tracking its natural domain, that is, determining what reproducing family it has been copied from. Compare tracking the...
The doctrine of the two truths - a conventional truth and an ultimate truth - is central to Buddhist metaphysics and epistemology. The two truths (or two realities), the distinction between them, and the relation between them is understood variously in different Buddhist schools; it is of special importance to the Madhyamaka school. One theory is articulated with particular force by Nagarjuna (2nd C CE) who famously claims that the two truths are identical to one another and yet distinct. One of the most influential interpretations of Nagarjuna's difficult doctrine derives from the commentary of Candrakīrti (6th C CE). In view of its special soteriological role, much attention has been devoted to explaining the nature of the ultimate truth; less, however, has been paid to understanding the nature of conventional truth, which is often described as "deceptive," "illusion," or "truth for fools." But because of the close relation between the two truths in Madhyamaka, conventional truth also demands analysis. Moonshadows, the product of years of collaboration by ten cowherds engaged in Philosophy and Buddhist Studies, provides this analysis. The book asks, "what is true about conventional truth?" and "what are the implications of an understanding of conventional truth for our lives?" Moonshadows begins with a philosophical exploration of classical Indian and Tibetan texts articulating Candrakīrti's view, and uses this textual exploration as a basis for a more systematic philosophical consideration of the issues raised by his account.
Abstract We suggest in this paper that attempts to segregate social-conventional reasoning from the moral domain may represent an artifactual division, one that ignores major philosophic and psychological traditions and cultural constructs regarding the moral self. We address such issues as the individual, social, and relational dimensions of morality; the cultural context of moral development and behavior; and whether morality is solely a matter of justice, harm and welfare considerations, or concerned as well with culturally variable definitions of the good self and the good society, with role obligations, and with caring and affective aspects of human experience. We conclude with a call for continuing narrative and anthropological approaches to the study of moral development in order to reach a fuller understanding of the multiple facets of moral life.
The existence of "local implicatures" has been the topic of much recent debate. The purpose of this paper is to contribute to this debate by asking what we can learn from three puzzles, namely, the cancellation of such implicatures by *or both*, their behavior in the complement clauses of negative factive verbs such as *sorry*, and their behavior in root and embedded questions. Two basic approaches to local implicatures have been advanced: a fully pragmatic account in which local implicatures result from conventional Gricean principles and a semantic account according to which the generation of implicatures is interwoven with compositional, grammatical mechanisms. We argue that the lesson to be learned from our three case studies is that some kind of approach along the latter, grammatical line is necessary to account for the data.
Tsong khapa, following Candrakīrti closely, writes that "'Convention' refers to a lack of understanding or ignorance; that is, that which obscures or conceals the way things really are" (Ocean of Reasoning 480–481). Candrakīrti himself puts the point this way: Obscurational truth is posited due to the force of afflictive ignorance, which constitutes the limbs of cyclic existence. The śrāvakas, pratyekabuddhas and bodhisattvas, who have abandoned afflictive ignorance, see compounded phenomena to be like reflections, to have the nature of being created; but these are not truths for them because they are not fixated on things as true. Fools are deceived, but for those others—just like an illusion—in virtue of being ...
Many take Malament's result that the standard Einstein simultaneity relation is uniquely definable from the causal structure of Minkowski space-time to be tantamount to a refutation of the claim that the criterion for simultaneity in the special theory of relativity (STR) is a matter of convention. I call this inference into question by examining concrete alternatives and suggest that what has been overlooked is why it should be assumed that in STR simultaneity must be relative only to a frame of reference (or an inertial observer) and not to other parameters as well.