Resolution of Frege's Puzzle by denying that synonym substitution in logical truths preserves sentence sense and by explaining how logical form has semantic import. Intensional context substitutions needn't preserve truth, because intercepting doesn't preserve sentence meaning. Intercepting is nonuniformly substituting a pivotal term in a syntactically secured truth. Logical sentences (GG: Greeks are Greeks; gg: Greece is Greece) and their synonym interceptions (GH: Greeks are Hellenes; gh: Greece is Hellas) share factual content (extrasentential reality asserted). Semantic (cognitive) content is (identifiable with) factual content in synthetic predications, but not in logical sentences and interceptions. Putnam's Postulate (logical form has semantic import) entails interception nonsynonymy. Syntax and vocabulary explain only the factual content of synthetic predications; extrasentential reality explains their truth. Construction of logical factual content explains logical necessity. Terms retain objectual reference, but logical syntax preempts their function (and thereby the function of extrasentential reality) in explaining truth. Grasping the facts GG/gg assert entails understanding this. Understanding what GH states requires some recognition that GH must be true just because GmH ("Greeks" means Hellenes) and GmH ("Greeks" means what "Hellenes" means) state an empirical fact. GH (but not GG) is standardly used to express that fact. Church's Test exposes puzzles. QMi sentences ("Ex" means Ex) and QTi sentences (p ≡ it is true that p; p ≡ "p" is true) are metalogical necessities, true by syntax. Intercepting QMi creates empirical QM contingencies ("Ex" means Ey). Synonymy turns semantic contingencies (GmH/GmH) into metalogical (GmG/GmG) and lexical (GH) necessities. That transformation is syntactic, via the syntactic duality of definite descriptions.
GmH is a contingent copredication, and a lexically necessary referential identity with rigidly codesignating indexicals. Metalogical sentences may be about expressional matter or about what it expresses (meaning, proposition). GG (Griechen sind Griechen) has GG's semantic content, but the referent expression switches. Metalogical syntax secures truth by self-referential quotational indexing. Metalogically, referents are identified with intrasentential replicas. Extrasentential identifications are metalogically irrelevant.
This paper advances a detailed exploration of the complex relationships among terms, concepts, and synonymy in the UMLS Metathesaurus, and proposes the study and understanding of the Metathesaurus from a model-theoretic perspective. Initial sections provide the background and motivation for such an approach, and a careful informal treatment of these notions is offered as a context and basis for the formal analysis. What emerges from this is a set of puzzles and confusions in the Metathesaurus and its literature pertaining to synonymy and its relation to terms and concepts. A model theory for a segment of the Metathesaurus is then constructed, and its adequacy relative to the informal treatment is demonstrated. Finally, it is shown how this approach clarifies and addresses the puzzles educed from the informal discussion, and how the model-theoretic perspective may be employed to evaluate some fundamental criticisms of the Metathesaurus.
W. V. O. Quine's well-known attack upon the analytic-synthetic distinction is held to affect only one of the two species of analytic statements he distinguishes. In particular it is not directed at and does not affect the so-called logical truths. In this paper the scope of Quine's attack is extended so as to embrace the logical truths as well. It is shown that the unclarifiability of the notion of 'synonymy' deprives us not only of "analytic statements that are obtainable from logical truths by the replacement of synonyms with synonyms" but of "logical truths" themselves.
On what seems to be the best interpretation, what Quine calls 'the problem of synonymy' in Two Dogmas is the problem of approximating the extension of our pretheoretic concept of synonymy by clear and respectable means. Quine thereby identified a problem which he himself did not think had any solution, and so far he has not been proven wrong. Some difficulties for providing a solution are discussed in this paper.
The very idea of informative analysis gives rise to a well-known paradox. Yet a parallel puzzle, herein called the paradox of synonymy, arises for statements which do not express analyses. The paradox of synonymy has a straightforward metalinguistic solution: certain words are referring to themselves. Likewise, the paradox of analysis can be solved by recognizing that certain expressions in an analysis statement are referring to their own semantic structures.
(in Chomsky and His Critics, edited [heroically] by Louise Antony and Norbert Hornstein, Blackwell 2003) You may need to "Rotate View, Clockwise" to get the .pdf file to appear properly. This paper was written in 1998, and so may be past its use-by date. Updated versions of various bits of the paper appear elsewhere; see note 1. More Truth in Advertising: I'm not criticizing Chomsky; though I am being critical, and Chomsky does figure prominently. The idea, as the subtitle suggests, is that there are analytic truths–even if the notion of synonymy is suspect. The trick involves (can you guess?) combining, in the right way, a neo-Davidsonian event semantics with a Minimalist syntax. Blatant Advertising: get hold of the entire book if only for Chomsky's replies; for anyone interested in Chomsky's conception of meaning (and his semantic internalism), see especially his replies to Egan, Rey, Ludlow, Horwich, and Pietroski.
The main purpose of this paper is to propose and defend a new definition of synonymy. Roughly (and slightly misleadingly), the idea is that two expressions are synonymous iff intersubstitutions in sentences preserve the degree of doxastic revisability. In Section 1 I argue that Quine's attacks on analyticity leave room for such a definition. The definition is presented in Section 2, and Section 3 elaborates on the concept of revisability. The definition is defended in Sections 4 and 5. It is, inter alia, shown that the definition has desired formal properties. In Sections 6 and 7 I briefly comment on, first, the relation of the definition to Quine's later ideas about (stimulus) synonymy, and, second, its relation to a general, interlinguistic concept of meaning.
In this paper I provide some formal schemas for the analysis of vague predicates in terms of a set of semantic relations other than classical synonymy, including weak synonymy (as between "large" and "huge"), antonymy (as between "large" and "small"), relativity (as between "large" and "large for a dog"), and a kind of supervenience (as between "large" and "wide" or "long"). All of these relations are representable in the simple comparative logic CL, in accordance with the basic formula: the more something is F, the more (or less) it is G. I use Carnapian meaning postulates to define these relations as constraints on interpretations of the formal language of CL.
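The basic formula lends itself to a schematic rendering. The following is a sketch in our own notation, not CL's official symbolism: $x \succ_F y$ abbreviates "x is more F than y".

```latex
% Weak synonymy ("large"/"huge"): the comparative orderings coincide
\forall x \forall y\, \bigl( x \succ_F y \leftrightarrow x \succ_G y \bigr)

% Antonymy ("large"/"small"): the comparative orderings are inverted
\forall x \forall y\, \bigl( x \succ_F y \leftrightarrow y \succ_G x \bigr)
```

Stated as meaning postulates, such biconditionals constrain which interpretations of F and G are admissible, which is how the paper's Carnapian strategy turns semantic relations into constraints on models.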
In "The Epistemology of Geometry" Glymour proposed a necessary structural condition for the synonymy of two space-time theories. David Zaret has recently challenged this proposal, by arguing that Newtonian gravitational theory with a flat, non-dynamic connection (FNGT) is intuitively synonymous with versions of the theory using a curved dynamical connection (CNGT), even though these two theories fail to satisfy Glymour's proposed necessary condition for synonymy. Zaret allowed that if FNGT and CNGT were not equally well (bootstrap) tested by the relevant phenomena, the two theories would in fact not be synonymous. He argued, however, that when electrodynamic phenomena are considered, the two theories are equally well tested. We show that it is not FNGT and CNGT which are equally well tested when the electrodynamic phenomena are considered, but only suitable extensions of FNGT and CNGT. Thus, there is good reason to consider FNGT and CNGT to be non-synonymous. We further show that the two extensions of FNGT and CNGT which are equally well tested when electrodynamic phenomena are considered (and which could be considered intuitively synonymous) not only satisfy Glymour's original proposed necessary condition for the synonymy of space-time theories, they satisfy a plausible stronger condition as well.
In this note two notions of meaning are considered and accordingly two versions of synonymy are defined, weaker and stronger ones. A new semantic device is introduced: a matrix is said to be pragmatic iff its algebra is in fact an algebra of meanings in the stronger sense. The new semantics is proved to be universal enough (Theorem 1), and it turns out to be in some sense a generalization of Wójcicki's referential semantics (Theorem 3).
Two declarative sentences are synonymous if, and only if, the statements they can be used to make are, given certain assumptions about the truth or falsity of other statements, confirmed or disconfirmed to the same degree by the same evidence. This criterion of synonymy is Quinean in that it treats confirmation holistically. But unlike Quine's criterion of synonymy, it conforms to and explains our intuitions of sentence synonymy.
This book explores how some word meanings are paradigmatically related to each other, for example, as opposites or synonyms, and how they relate to the mental organization of our vocabularies. Traditional approaches claim that such relationships are part of our lexical knowledge (our "dictionary" of mentally stored words) but Lynne Murphy argues that lexical relationships actually constitute our "metalinguistic" knowledge. The book draws on a century of previous research, including word association experiments, child language, and the use of synonyms and antonyms in text.
William of Ockham's semantic theory was founded on the idea that thought takes place in a language not unlike the languages in which spoken and written communication occur. This mental language was held to have a number of features in common with everyday languages. For example, mental language has simple terms, not unlike words, out of which complex expressions can be constructed. As with words, each of these terms has some meaning, or signification; in fact Ockham held that the signification of everyday words derives precisely from the signification of mental terms. Furthermore, the meaning of a mental expression depends directly on the meaning of its constituent terms, as is the case with expressions in more familiar languages.
Quine claims that holism (i.e., the Quine-Duhem thesis) prevents us from defining synonymy and analyticity (section 2). In Word and Object, he dismisses a notion of synonymy which works well even if holism is true. The notion goes back to a proposal from Grice and Strawson and runs thus: R and S are synonymous iff for all sentences T we have that the logical conjunction of R and T is stimulus-synonymous to that of S and T. Whereas Grice and Strawson did not attempt to defend this definition, I try to show that it indeed gives us a satisfactory account of synonymy. Contrary to Quine, the notion is tighter than stimulus-synonymy – particularly when applied to sentences with less than critical semantic mass (section 3). Now according to Quine, analyticity could be defined in terms of synonymy, if synonymy were to make sense: A sentence is analytic iff synonymous to self-conditionals. This leads us to the following notion of analyticity: S is analytic iff, for all sentences T, the logical conjunction of S and T is stimulus-synonymous to T; an analytic sentence does not change the semantic mass of any theory to which it may be conjoined (section 4). This notion is tighter than Quine's stimulus-analyticity; unlike stimulus-analyticity, it does not apply to those sentences from the very center of our theories which can be assented to come what may, even though they are not synthetic in the intuitive sense (section 5). Conclusion: We can have well-defined notions of synonymy and analyticity even if we embrace Quine's holism, naturalism, behaviorism, and radical translation. Quine's meaning skepticism is to be repudiated on Quinean grounds.
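The two definitions just described can be displayed schematically. Writing $\approx_s$ for stimulus-synonymy (the symbolism is ours, supplied for compactness), they read:

```latex
% Synonymy (after Grice and Strawson): conjoining either sentence
% with any T yields stimulus-synonymous results
R \text{ syn } S \;\iff\; \forall T\; (R \wedge T) \approx_s (S \wedge T)

% Analyticity: conjoining S adds no semantic mass to any theory
\mathrm{An}(S) \;\iff\; \forall T\; (S \wedge T) \approx_s T
```

Note how the second schema is the first with R replaced by a tautologous conjunct, which is why analyticity falls out of synonymy once the latter is admitted.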
Analyticity is a bogus explanatory concept, and is so even granting genuine synonymy. Definitions can't explain the truth of a statement, let alone its necessity and/or our a priori knowledge of it. The illusion of an explanation is revealed by exposing diverse confusions: e.g., between nominal, conceptual and real definitions, and correspondingly between notational, conceptual, and objectual readings of alleged analytic truths, and between speaking a language and operating a calculus. The putative explananda of analyticity are (alleged) truths about essential properties. Real definitions (a la Socrates) are the (alleged) explananda, not the explanans of analyticity. Their truth can be explained neither by conceptual definitions (a la Kant), nor by nominal definitions (a la Frege). The Quinean assault on synonymy is unsuccessful and in any case misplaced, because analyticity turns on the explanatory import of synonymy, not its existence. Synonym substitution in a logical truth cannot yield a necessary truth for it doesn't preserve logical form. Self-identity statements (for properties and/or individuals) differ in logical form from alter-identity statements.
If logical truth is necessitated by sheer syntax, mathematics is categorially unlike logic even if all mathematics derives from definitions and logical principles. This contrast gets obscured by the plausibility of the Synonym Substitution Principle implicit in conceptions of analyticity: synonym substitution cannot alter sentence sense. The Principle obviously fails with intercepting: nonuniform term substitution in logical sentences. 'Televisions are televisions' and 'TVs are televisions' neither sound alike nor are used interchangeably. Interception synonymy gets assumed because logical sentences and their synonymic interceptions have identical factual content, which seems to exhaust semantic content. However, intercepting alters syntax by eliminating term recurrence, the sole strictly syntactic means of ensuring necessary term coextension, and thereby syntactically securing necessary truth. Interceptional necessity is lexical, a notational artifact. The denial of interception nonsynonymy and the disregard of term recurrence in logic link with many misconceptions about propositions, logical form, conventions, and metalanguages. Mathematics is distinct from logic: its truth is not syntactic; it is transmitted by synonym substitution; term recurrence has no essential role. The '=' of mathematics is an objectual relation between numbers; the '=' of logic marks a syntactic relation of coreferring terms.
In this paper it is shown how a formal theory of interpretation in Montague’s style can be reconciled with a view on meaning as a social construct. We sketch a formal theory in which agents can have their own theory of interpretation and in which groups can have common theories of interpretation. Frege solved the problem how different persons can have access to the same proposition by placing the proposition in a Platonic realm, independent from all language users but accessible to all of them. Here we explore the alternative of letting meaning be socially constructed. The meaning of a sentence is accessible to each member of a linguistic community because the way the sentence is to be interpreted is common knowledge among the members of that community. Misunderstandings can arise when the semantic knowledge of two or more individuals is not completely in sync.
This paper introduces the notion of syntactic feature to provide a unified treatment of earlier model theoretic proofs of both the compactness and interpolation theorems for a variety of two-valued logics including sentential logic, first order logic, and a family of modal sentential logics including M, B, S4 and S5. The compactness papers focused on providing a proof of the consequence formulation which exhibited the appropriate finite subset. A unified presentation of these proofs is given by isolating their essential feature and presenting it as an abstract principle about syntactic features. The interpolation papers focused on exhibiting the interpolant. A unified presentation of these proofs is given by isolating their essential feature and presenting it as a second abstract principle about syntactic features. This second principle reduces the problem of exhibiting the interpolant to that of establishing the existence of a family of syntactic features satisfying certain conditions. The existence of such features is established for a variety of logics (including those mentioned above) by purely combinatorial arguments.
In Word and Object, Quine argues from the observation that "there is no justification for collating linguistic meanings, unless in terms of men's dispositions to respond overtly to socially observable stimulations" to the conclusion that "the enterprise of translation is found to be involved in a certain systematic indeterminacy". In this paper, I propose to show (1) that Quine's thesis, when properly understood, reveals in the situation of translation no peculiar indeterminacy but merely the ordinary indeterminacy present in any case of empirical investigation; (2) that it is plausible that, because the subject of inquiry is language, we are in a better position with respect to such empirical indeterminacies than we are in other areas of investigation; (3) that, in any case, Quine's arguments are impotent, for they are either contradictory or incoherent; and (4) that Quine is led to his radical conclusions because he confuses a trivial and unexciting indeterminacy, which does obtain, with the striking indeterminacy for which he argues, which does not obtain.
This paper addresses the high sonic demands of alliterative metres, and the consequences of these demands for sense: the semantic stretching of common words and the deployment of uncommon (archaic, ‘poetic’) words. The notion of alliterative rank is discussed as an indicator of such consequences (examples are given from English and Estonian verse) and the range of onsets found for synonyms of key notions in verse traditions is remarked upon.
Critique of Alonzo Church's Translation Test. Church's test is based on a common misconception of the grammar of (so-called) quotations. His conclusion (that metalogical truths are actually contingent empirical truths) is a reductio of that conception. Church's argument begs the question by assuming that translation must preserve reference despite altering the logical form of statements whose truth is explained by their form.
A new kind of defense of the Millian theory of names is given, which explains intuitive counter-examples as depending on pragmatic effects of the relevant sentences, by direct application of Grice’s and Sperber and Wilson’s Relevance Theory and uncontroversial assumptions. I begin by arguing that synonyms are always intersubstitutable, despite Mates’ considerations, and then apply the method to names. Then, a fairly large sample of cases concerning names are dealt with in related ways. It is argued that the method, as applied to the various cases, satisfies the criterion of success: that for every sentence in context, it is a counter-example to Millianism to the extent that it has pragmatic effects (matching speakers’ intuitions).
The relation between linguistics and logic has been discussed in a recent paper by Bar-Hillel, where it is argued that a disregard for work in logical syntax and semantics has caused linguists to limit themselves too narrowly in their inquiries, and to fall into several errors. In particular, Bar-Hillel asserts, they have attempted to derive relations of synonymy and so-called 'rules of transformation', such as the active-passive relation, from distributional studies alone, and they have hesitated to rely on considerations of meaning in linguistic analysis. No one can quarrel with the suggestion that linguists interest themselves in meaning or transformation rules, but the relevance of logical syntax and semantics (at least as we now know them) to this study is very dubious. I think that a closer investigation of the assumptions and concerns of logical syntax and semantics will show that the hope of applying the results which have been achieved in these fields to the solution of linguistic problems is illusory.
The paper criticizes epistemological conceptions of analytic or conceptual truth, on which assent to such truths is a necessary condition of understanding them. The critique involves no Quinean scepticism about meaning. Rather, even granted that a paradigmatic candidate for analyticity is synonymy with a logical truth, both the former and the latter can be intelligibly doubted by linguistically competent deviant logicians, who, although mistaken, still constitute counterexamples to the claim that assent is necessary for understanding. There are no analytic or conceptual truths in the epistemological sense. The critique is extended to purportedly analytic inference rules. An alternative account is sketched on which understanding a word is a matter of participation in a linguistic practice, while synonymy and concept identity consist in sameness of truth-conditional semantic properties. Although there are philosophical questions about concepts, the idea that philosophical questions in general are conceptual questions generates only an illusion of insight into philosophical methodology.
In this paper, we address several puzzles concerning speech acts, particularly indirect speech acts. We show how a formal semantic theory of discourse interpretation can be used to define speech acts and to avoid murky issues concerning the metaphysics of action. We provide a formally precise definition of indirect speech acts, including the subclass of so-called conventionalized indirect speech acts. This analysis draws heavily on parallels between phenomena at the speech act level and the lexical level. First, we argue that, just as co-predication shows that some words can behave linguistically as if they're 'simultaneously' of incompatible semantic types, certain speech acts behave this way too. Secondly, as Horn and Bayer (1984) and others have suggested, both the lexicon and speech acts are subject to a principle of blocking or "preemption by synonymy": conventionalized indirect speech acts can block their 'paraphrases' from being interpreted as indirect speech acts, even if this interpretation is calculable from Gricean-style principles. We provide a formal model of this blocking, and compare it with existing accounts of lexical blocking.
Category mistakes are sentences such as ‘Colourless green ideas sleep furiously’ or ‘The theory of relativity is eating breakfast’. Such sentences are highly anomalous, and this has led a large number of linguists and philosophers to conclude that they are meaningless (call this ‘the meaninglessness view’). In this paper I argue that the meaninglessness view is incorrect and category mistakes are meaningful. I provide four arguments against the meaninglessness view: in Sect. 2, an argument concerning compositionality with respect to category mistakes; in Sect. 3 an argument concerning synonymy facts of category mistakes; in Sect. 4 concerning embeddings of category mistakes in propositional attitude ascriptions; and in Sect. 5 concerning the uses of category mistakes in metaphors. Having presented these arguments, in Sect. 6 I briefly discuss some of the positive motivations for accepting the meaninglessness view and argue that they are unconvincing. I conclude that the meaninglessness view ought to be rejected.
Lexical Semantics is about the meaning of words. Although obviously a central concern of linguistics, the semantic behaviour of words has been unduly neglected in the current literature, which has tended to emphasize sentential semantics and its relation to formal systems of logic. In this textbook D. A. Cruse establishes in a principled and disciplined way the descriptive and generalizable facts about lexical relations that any formal theory of semantics will have to encompass. Among the topics covered in depth are idiomaticity, lexical ambiguity, synonymy, hierarchical relations such as hyponymy and meronymy, and various types of oppositeness. Syntagmatic relations are also treated in some detail. The discussions are richly illustrated by examples drawn almost entirely from English. Although a familiarity with traditional grammar is assumed, readers with no technical linguistic background will find the exposition always accessible. All readers with an interest in semantics will find in this original text not only essential background but a stimulating new perspective on the field.
Quine famously argued that analyticity is indefinable, since there is no good account of analyticity in terms of synonymy, and intensions are of no help since there are no intensions. Yet if there are intensions, the question still remains as to the right account of analyticity in terms of them. On the assumption that intensions must be admitted, the present paper considers two such accounts. The first analyzes analyticity in terms of concept identity, and the second analyzes analyticity in terms of the analysis relation. The first fails in light of possible counterexamples. The second is defended, both by considering test cases of intuitively clear analyticities, and by developing the account in light of possible counterexamples.
In Naming and Necessity Kripke accuses Frege of conflating two notions of meaning (or sense), one is meaning proper, the other is determining of reference (p. 59). More precisely, Kripke argues that Frege conflated the question of how the meaning of a word is given or determined with the question of how its reference is determined. The criterial mark of meaning determination, according to Kripke, is a statement of synonymy: if we give the sense of “a” by means of “b”, then the two expressions must be synonymous. The criterial mark of reference-determination is knowledge, typically a priori, of the truth of their identity: If the reference of “a” is given by “b”, then we know a priori that a is b. Kripke then argues that Frege’s conceptions of both meaning-determination and of reference determination were wrong, and proposes an alternative picture of reference determination.
I advance what might be thought a paradoxical thesis: that the central topic of Hume’s long discussions “Of the Idea of Necessary Connexion” is not, in fact, the idea of necessary connexion. However it is not as paradoxical as it first appears, for I shall claim that the “idea” whose origin Hume seeks is, in a sense, an idea-type of which the specific idea of necessary connexion is but one instance. Various lines of evidence support this claim, but my main argument will rest on its ability to solve four puzzles in Hume’s text, which are otherwise hard to explain away. These are: (S) the synonymy puzzle, posed by Hume’s apparently reckless assertion that “efficacy”, “agency”, “power”, “force”, “energy”, “necessity”, “connexion”, and “productive quality” are all virtual synonyms; (C) the complexity puzzle, that Hume seems to ignore the possibility that his target idea might be complex rather than simple; (V) the vulgar problem, which arises from Hume’s acknowledgement that the vulgar believe in “chancy” causes, even though he takes the very concept of causation to involve necessity; and (P) the probability problem, of how an allegedly simple idea whose central core involves inexorable necessity could possibly provide a basis for probability. The paper ends by drawing further support from an analysis of Hume’s two sections “Of the idea of necessary connexion”, showing that his use of the various relevant terms makes good sense on the thesis proposed, thus corroborating the arguments presented.
The traditional understanding of analyticity in terms of concept containment is revisited, but with a concept explicitly understood as a certain kind of mental representation and containment being read correspondingly literally. The resulting conception of analyticity avoids much of the vagueness associated with attempts to explicate analyticity in terms of synonymy by moving the locus of discussion from the philosophy of language to the philosophy of mind. The account provided here illustrates some interesting features of representations and explains, at least in part, the special epistemic status of analytic judgments.
The paper deals with the semantics and ontology of ordinary discourse about properties. The main focus lies on the following thesis: A simple predication of the form ‘a is F’ is synonymous with the corresponding explicit property-attribution ‘a has F-ness’. An argument against this Synonymy Thesis is put forth which is based on the thesis that simple predications and property-attributions differ in their conditions of understanding. In defending the argument, the paper accounts for the way in which we come to adopt the conceptual framework of properties.
Though largely unnoticed, in “Two Dogmas” Quine (1951, Two Dogmas of Empiricism, Philosophical Review 60, 20–43. Reprinted in From a Logical Point of View, 20–46) himself invokes a distinction: a distinction between logical and analytic truths. Unlike analytic statements equating ‘bachelor’ with ‘unmarried man’, strictly logical tautologies relating two word-tokens of the same word-type, e.g., ‘bachelor’ and ‘bachelor’ are true merely in virtue of basic phonological form, putatively an exclusively non-semantic function of perceptual categorization or brute stimulus behavior. Yet natural language phonemic categorization is not entirely free of interpretive semantic considerations. “Phonemic reductionism” in both its linguistic (Bloch 1953, Contrast, Language 29, 59–61) and behavioral (Quine 1990, The Phoneme’s Long Shadow, Emics and Etics: The Insider/Outsider Debate, T. Headland, K. Pike and M. Harris, (eds.), Newbury Park, CA, Sage Publications, 164–167) guise is false. The semantic basis of phonological equivalence, however, has repercussions vis-à-vis Quine’s critique of analyticity. A consistent rejection of meaning-based equivalencies eliminates not only analyticity, but imposes a form of phonological eliminativism too. Phonological eliminativism is the reductio result of applying Quinean meaning skepticism to the phonological typing of natural language. But unlike analyticity, phonology is presumably not subject to philosophical dismissal. The semantic basis of natural language phonology serves to neutralize Quine’s argument against analyticity: without the semantics of meaning, more than just synonymy is lost; basic phonology must also be forfeited. As Paul Boghossian puts it: “Let’s begin with the fact that even Quine has to admit that it is possible for two tokens of the same orthographic type to be synonymous, for that much is presupposed by his own account of logical truth” (1999, 343).
The notion of homonymy has been of perennial philosophical interest to scholars of Aristotle from ancient Greek commentators to modern thinkers. Across historical periods, certain issues have remained central, such as the nature of Aristotelian homonymy, its relation to synonymy and analogy, and whether the concept undergoes change throughout the corpus. In addition, fundamental questions concerning the use of homonymy in regard to dialectical practice and scientific inquiry are raised and discussed. It is argued that there are two aspects to Aristotelian homonymy, negative and positive in function, which provide complementary roles in regard to dialectic and science.
A variety of legal documents are increasingly being made available in electronic format. Automatic Information Search and Retrieval algorithms play a key role in enabling efficient access to such digitized documents. Although keyword-based search is the traditional method used for text retrieval, it performs poorly when literal term matching is done for query processing, due to the synonymy and ambiguity of words. To overcome these drawbacks, an ontological framework to enhance the user’s query for retrieval of truly relevant legal judgments has been proposed in this paper. Ontologies ensure efficient retrieval by enabling inferences based on domain knowledge, which is gathered during the construction of the knowledge base. Empirical results demonstrate that ontology-based searches generate significantly better results than traditional search methods.
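The core mechanism this abstract describes, expanding a user's query with ontology synonyms before matching, can be sketched minimally as follows. This is an illustrative toy, not the paper's actual system: the ontology entries, function names, and sample documents are all invented for the example.

```python
# Minimal sketch of ontology-based query expansion for retrieval.
# The ontology below is a toy stand-in for a real legal-domain
# knowledge base; entries and names are hypothetical.

LEGAL_ONTOLOGY = {
    "theft": {"larceny", "stealing"},
    "contract": {"agreement", "covenant"},
}

def expand_query(terms):
    """Expand each query term with its ontology synonyms."""
    expanded = set()
    for term in terms:
        expanded.add(term)
        expanded |= LEGAL_ONTOLOGY.get(term, set())
    return expanded

def retrieve(docs, terms):
    """Return documents containing any expanded query term.

    Literal keyword matching alone would miss the first document
    for the query ["theft"]; expansion recovers it via "larceny".
    """
    query = expand_query(terms)
    return [d for d in docs if query & set(d.lower().split())]

docs = ["Judgment on larceny charges", "Ruling on zoning permits"]
print(retrieve(docs, ["theft"]))
```

The design point is exactly the abstract's: the inference ("theft" subsumes "larceny") lives in the knowledge base, so the retrieval step itself can stay a simple term match.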
In this essay I present a statement of Quine's indeterminacy thesis in its general form. It is shown that the thesis is not about difficulties peculiar to so-called "radical translation." It is a general thesis about meaning and reference with important consequences for any theory of our theories and beliefs. It is claimed that the thesis is inconsistent with Quine's realism and his doctrine of the relativity of reference, and that the argument for the thesis has the consequence that the concept of stimulus meaning is empty. The sense in which linguistic science, as a branch of behavioral science, is "part of physics" is discussed. An alternative to Quine's view of the nature and content of linguistic science is proposed. It is shown to be consistent with Quine's assumptions concerning the legitimate scope of behavioral science and not to involve the notions of analyticity, synonymy and "prevalent attitudes toward meaning, idea and proposition" (p. 304) rejected by Quine.
Grice and Strawson's 'In Defense of a Dogma' is admired even by revisionist Quineans such as Putnam (1962) who should know better. The analytic/synthetic distinction they defend is distinct from that which Putnam successfully rehabilitates. Theirs is the post-positivist distinction bounding a grossly enlarged analytic. It is not, as they claim, the sanctified product of a long philosophic tradition, but the cast-off of a defunct philosophy - logical positivism. The fact that the distinction can be communally drawn does not show that it is based on a real difference. Subcategories that can be grouped together by enumeration will do the trick. Quine's polemical tactic (against which Grice and Strawson protest) of questioning the intelligibility of the distinction is indeed objectionable, but his argument can be revived once it is realized that 'analytic' et al. are theoretic terms, and there is no extant theory to make sense of them. Grice and Strawson's paradigm of logical impossibility is, in fact, possible. Their attempt to define synonymy in Quinean terms is a failure, nor can they retain analyticity along with the Quinean thesis of universal revisability. The dogma, in short, is indefensible.
This article provides the first comprehensive reconstruction and analysis of Hintikka’s attempt to obtain a measure of the information yield of deductive inferences. The reconstruction is detailed by necessity due to the originality of Hintikka’s contribution. The analysis will turn out to be destructive. It dismisses Hintikka’s distinction between surface information and depth information as being of no utility towards obtaining a measure of the information yield of deductive inferences. Hintikka is right to identify the failure of canonical information theory to give an account of the information yield of deductions as a scandal; however, this article demonstrates that his attempt to provide such an account fails. It fails primarily because it applies to only a restricted set of deductions in the polyadic predicate calculus, and fails to apply at all to deductions in the monadic predicate calculus and the propositional calculus. Some corollaries of these facts are a number of undesirable and counterintuitive results concerning the proposed relation of linguistic meaning (and hence synonymy) with surface information. Some of these results will be seen to contradict Hintikka’s stated aims, whilst others are seen to be false. The consequence is that the problem of obtaining a measure of the information yield of deductive inferences remains an open one. The failure of Hintikka’s proposal will suggest that a purely syntactic approach to the problem be abandoned in favour of an intrinsically semantic one.
This paper presents a tree method for testing the validity of inferences, including syllogisms, in a simple term logic. The method is given in the form of an algorithm and is shown to be sound and complete with respect to the obvious denotational semantics. The primitive logical constants of the system, which is indebted to the logical works of Jevons, Brentano and Lewis Carroll, are term negation, polyadic term conjunction, and functors affirming and denying existence, and use is also made of a metalinguistic concept of formal synonymy. It is indicated briefly how the method may be extended to other systems.
Nelson Goodman's proposal for a reconception of meaning consists in replacing the absolute notion of sameness of meaning by that of likeness of meaning (with respect to pertinent contexts). According to this view, synonymy is a matter of degree (of interreplaceability) with identity of expression as a limiting case. Goodman's demonstration that no two expressions are exactly alike in meaning is shown to be unsuccessful. Although it does not make use of quotational contexts for the test of interreplaceability, it is tantamount to their acceptance. Goodman rejects quotational contexts; I argue that they should be accepted. This move offers two advantages. Firstly, and mainly, it allows interlinguistic comparison of meaning, something that has not been deemed possible in the received version of Goodman's account. Secondly, it restores the full scale of likeness of meaning damaged by the renunciation of those contexts that guarantee difference in meaning for diverse expressions.
Quine claims that holism (i.e., the Quine-Duhem thesis) prevents us from defining synonymy and analyticity (section 2). In "Word and Object," he dismisses a notion of synonymy which works well even if holism is true. The notion goes back to a proposal from Grice and Strawson and runs thus: R and S are synonymous iff for all sentences T we have that the logical conjunction of R and T is stimulus-synonymous to that of S and T. Whereas Grice and Strawson did not attempt to defend this definition, I try to show that it indeed gives us a satisfactory account of synonymy. Contrary to Quine, the notion is tighter than stimulus-synonymy -- particularly when applied to sentences with less than critical semantic mass (section 3). Now according to Quine, analyticity could be defined in terms of synonymy, if synonymy were to make sense: A sentence is analytic iff synonymous to self-conditionals. This leads us to the following notion of analyticity: S is analytic iff, for all sentences T, the logical conjunction of S and T is stimulus-synonymous to T; an analytic sentence does not change the semantic mass of any theory to which it may be conjoined (section 4). This notion is tighter than Quine's stimulus-analyticity; unlike stimulus-analyticity, it does not apply to those sentences from the very center of our theories which can be assented to come what may, even though they are not synthetic in the intuitive sense (section 5).
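The two definitions stated in prose in this abstract can be put compactly in symbols. This is a paraphrase for the reader's convenience, not the author's own notation; write $\approx_{\mathrm{stim}}$ for Quine's stimulus-synonymy and quantify $T$ over sentences of the language:

```latex
% Grice-Strawson-style synonymy defined from stimulus-synonymy:
R \approx S \;\iff\; \forall T \,\bigl( (R \wedge T) \approx_{\mathrm{stim}} (S \wedge T) \bigr)

% The derived notion of analyticity (section 4):
\mathrm{Analytic}(S) \;\iff\; \forall T \,\bigl( (S \wedge T) \approx_{\mathrm{stim}} T \bigr)
```

The second definition is the first specialized: an analytic $S$ is one whose conjunction with any $T$ is stimulus-synonymous to $T$ itself, i.e. conjoining $S$ adds no semantic mass to any theory.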
This article is devoted to the question: does the Duhemian argument support the position taken by those contemporary philosophers who--like W. V. O. Quine and M. White--reject the distinction between analytic and synthetic statements? The term "Duhemian argument" is used to refer to the following statement: it is impossible to put to the test one isolated empirical statement; testing empirical statements involves testing a whole group of hypotheses. An analysis of the logical structure of reductive reasoning leads to the conclusion that the Duhemian argument is valid and that it entails the following statements: (1)--experience alone cannot compel us absolutely to the acceptance of any isolated empirical statement whatsoever, independently of our acceptance or rejection of some other statements, and (2)--no isolated empirical statement can be conclusively falsified by experience, independently of our acceptance or rejection of some other statements. The Duhemian argument seems then to establish conclusively the cogency of the claim that, in principle, it is possible to reject or to maintain any particular empirical statement, provided we make appropriate changes in the system of hypotheses which is put to test. The philosophers who reject the distinction between analytic and synthetic statements--in particular Quine--claim that the same line of reasoning supports their contention. It is alleged that: (1)--the Duhemian argument makes impossible a definition of statement synonymy and, consequently, a definition of analyticity in terms of synonymy, and (2)--that the unit of empirical significance is the whole of science or the total science, and (3)--that it is a folly to seek a boundary between synthetic and analytic statements, because all our statements are equally open to revision. The article tries to show that these conclusions do not follow from the Duhemian argument.
In particular it is shown: (1)--that the Duhemian argument does not exclude the definition of statement synonymy, (2)--that this argument does not support the contention that the enigmatic entity called "the whole of science" or the "total science" is involved in each and every testing procedure, (3)--that the principle of fundamental revisability of every statement does not change the fact that in scientific practice the situation is never so hopeless as the Duhemian argument seems to imply, because even inconclusive arguments may differ in their adequacy, and (4)--that the term "revision" is ambiguous and only this ambiguity lends an air of plausibility to Quine's formulations. The conclusion is that the Duhemian line of reasoning does not support the contention of philosophers who reject the distinction between analytic and synthetic statements.
A certain direction in cognitive science has been to try to “ground” public language statements in some species of mental representation. A central tenet of this trend is that communication – that is, public language – succeeds (when it does) because the elements of this public language are in some way correlated with mental items of both the speaker and the audience so that the mental state evoked in the audience by the use of that piece of public language is the one that the speaker wanted to evoke. The “meaning”, therefore, of an utterance – and of the parts of an utterance, such as individual sentences and their parts, the individual words, etc. – is, in this view, some mental item. Successful communication requires that there be widespread agreement amongst speakers of the same public language as to the mental entities that are correlated with any particular public words. Such a view of meaning is variously called “internalist” or “cognitive” or “subjectivist” or “solipsistic” or (sometimes) “representationalist” (these terms having, however, further connotations which set them apart from one another in other ways), and can be found in a wide variety of writers who do not agree on many other things. It is opposed to views that take the meaning of an utterance to be an item of “reality,” however defined. In different writers this latter view is called “externalist” or “objectivist” or “realist” or (sometimes) “representationalist,” always with the idea that there is something other (or at least, more) than the mental state of speakers and hearers that determines meaning. The literature is rife with arguments between internalists vs. externalists, subjectivists vs. objectivists, cognitivists vs. realists, on such topics as “truth” and “synonymy” and “twin earth” and “arthritis” (to mention only a few).
A version of the so-called paradox of analysis is enunciated which involves two principles of synonymy, referred to respectively as that of substitution and that of triviality. It is argued that for most ‘familiar’ concepts of synonymy the former principle can be maintained whereas the latter one has to be rejected. I deal with some solutions to the paradox that have been proposed or discussed by Carnap, Lewy, Feyerabend and Hare, and adhere to Carnap's view that the puzzle arises from the use of unclarified and imprecise notions of synonymy.
The genetic code has evolved from its initial non-degenerate wobble version until reaching its present state of degeneracy. By using the stereochemical hypothesis, we revisit the problem of codon assignations to the synonymy classes of amino-acids. We obtain these classes with a simple classifier based on physico-chemical properties of nucleic bases, like hydrophobicity and molecular weight. Then we propose simple RNA (or more generally XNA, with X for D, P or R) ring structures that present, overlap included, one and only one codon by synonymy class as solutions of a combinatory variational problem. We compare these solutions to sequences of present RNAs considered as relics, with a high interspecific invariance, like invariant parts of tRNAs and micro-RNAs. We conclude by emphasizing some optimal properties of the genetic code.
This paper proposes a new architecture for textual inference in which finding a good alignment is separated from evaluating entailment. Current approaches to semantic inference in question answering and textual entailment have approximated the entailment problem as that of computing the best alignment of the hypothesis to the text, using a locally decomposable matching score. While this formulation is adequate for representing local (word-level) phenomena such as synonymy, it is incapable of representing global interactions, such as that between verb negation and the addition/removal of qualifiers, which are often critical for determining entailment. We propose a pipelined approach where alignment is followed by a classification step, in which we extract features representing high-level characteristics of the entailment problem, and give the resulting feature vector to a statistical classifier trained on development data.
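The two-stage architecture this abstract describes can be illustrated with a deliberately tiny sketch. The alignment score, the features, and the thresholded rule below are invented stand-ins: the paper uses a statistical classifier trained on development data, not a hand-set threshold, and its alignment and features are far richer.

```python
# Toy sketch of the alignment-then-classification pipeline for
# textual entailment. All scoring details are illustrative.

def align(text, hypothesis):
    """Stage 1: a locally decomposable matching score -- here,
    the fraction of hypothesis words found in the text."""
    t = set(text.lower().split())
    h = set(hypothesis.lower().split())
    return len(t & h) / len(h)

def extract_features(text, hypothesis):
    """Stage 2a: global features a word-level alignment score
    cannot represent, e.g. a mismatch in verb negation."""
    neg_t = "not" in text.lower().split()
    neg_h = "not" in hypothesis.lower().split()
    return {
        "alignment": align(text, hypothesis),
        "negation_mismatch": neg_t != neg_h,
    }

def entails(text, hypothesis, threshold=0.6):
    """Stage 2b: classification over the feature vector; a fixed
    rule stands in for a trained statistical classifier."""
    f = extract_features(text, hypothesis)
    return f["alignment"] >= threshold and not f["negation_mismatch"]

print(entails("The cat sat on the mat", "The cat sat"))  # high overlap, no negation
print(entails("The cat did not sit", "The cat sat"))     # negation flips the verdict
```

The second example is the abstract's point in miniature: alignment alone scores both pairs well, but the negation feature lets the classification step overrule it.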
Computational semantics is the study of how to represent meaning in a way that computers can use. For the authors of this textbook, this study includes the representation of the meaning of natural language in logic formalisms, the recognition of certain relations that hold within this formalization (such as synonymy, consistency, and implication), and the computational implementation of all this. I think that, while there probably are not many courses devoted to computational semantics, this book could profitably be incorporated into more traditional computational linguistics courses, especially when two courses are offered serially. The material here could be spread out and integrated into parts of a more standard pair of these courses, and it would result in a substantial widening of the knowledge that students come away with from these courses.
The application of semantical concepts such as synonymy and interpretation to actual situations of usage gives rise to perplexing problems. One of the few attempts to tackle these problems has been carried out by Arne Naess. Further advances along this line may become possible after a clarification of the basic concepts employed. The discussion centers around empirical synonymy and certain other notions built on this concept by Naess. Possible ways of making the system coherent are indicated.
John Mair (1467–1550) was an influential post-medieval scholar. This paper focuses on his Tractatus insolubilium, in which he proposed a semantic analysis of self-referential phenomena, in particular on his solution to alethic and correspondence paradoxes and his treatment of their general semantic aspects as well as particular applications. His solution to paradoxes is based on the so-called “network evaluation”, i.e. on a semantics which defines the concepts of truth and correspondence with reality in contextual terms. Consequently, the relation between semantic valuation, synonymy and contradiction must be redefined.
cal practice: the enterprise of specifying information about the world for use in computer systems. Knowledge representation as a field also encompasses conceptual results that call practitioners’ attention to important truths about the world, mathematical results that allow practitioners to make these truths precise, and computational results that put these truths to work. This chapter surveys this practice and its results, as it applies to the interpretation of natural language utterances in implemented natural language processing systems. For a broader perspective on such technical practice, in all its strengths and weaknesses, see (Agre 1997). Knowledge representation offers a powerful general tool for the science of language. Computational logic, a prototypical formalism for representing knowledge about the world, is also the model for the level of logical form that linguists use to characterize the grammar of meaning (Larson and Segal 1995). And researchers from (Schank and Abelson 1977) to (Shieber 1993) and (Bos to appear) have relied crucially on such representations, and the inference methods associated with them, in articulating accounts of semantic relations in language, such as synonymy, entailment, informativeness and contradiction. The new textbooks (Blackburn and Bos 2002a, Blackburn and Bos 2002b) provide an excellent grounding in this research, and demonstrate how deeply computational ideas from knowledge representation can inform pure linguistic study. In this short chapter, I must leave much of…
After an introduction which demonstrates the failure of the equational analogue of Beth's definability theorem, the first two sections of this paper are devoted to an elementary exposition of a proof that a functional constant is equationally definable in an equational theory iff every model of the set of those consequences of the theory that do not contain the functional constant is uniquely extendible to a model of the theory itself. Sections three, four and five are devoted to applications and extensions of this result. Topics considered here include equational definability in first order logic, an extended notion of definability in equational logic, and the synonymy of equational theories. The final two sections briefly review some of the history of equational logic.
Concentrating on the legacy of David Hume, I discuss the impact of his psychologism on his two most important sharp distinctions: (1) between statements about the relations of ideas and those about matters of fact; and (2) between what is and what ought to be. I argue that his concept of relations of ideas is subject to difficulties like those attending the concept of synonymy in twentieth-century discussions, and also that his psychologism should lead him to say that (1) is not a sharp distinction. I then raise the more difficult question of whether Hume would have said, as Quine does, that normative epistemology is an empirical science but that normative ethics is not. Finally, I discuss the difficulty of presenting naturalistic support for the claim that a scientific theory ought to predict successfully, be comparatively simple, and respect older truths in some degree.
Machine generated contents note: Preface; Acknowledgements; 1. Conventionalism and the linguistic doctrine of logical truth; 2. Analyticity and synonymy; 3. The indeterminacy of translation; 4. Ontological relativity; 5. Criticisms and extensions; Concluding remarks: conventionalism and implications; Bibliography; Index.
Self-reference suffices to define performative sentences. “I say (or its variants) that” is communicatively functional. By it, the speaker shows he is aware of how he is being seen by hearers. Therefore “I order” and “I do not order” are equally performative, though the latter does not perform any activity. This is our first proposal. In the subdivision (the criterion of activity can yield nothing but a subdivision), “I do not permit” is active. Our second proposal explains that anomaly, attending to the synonymy between “I do not permit” and “I forbid”, but without using it as a premature escape.
W.V.O. Quine has famously objected that (1) properties are philosophically suspect because (2) there is no entity without identity and (3) the synonymy criterion for property identity won't do because there's no such concept as synonymy. (2) and (3) may or may not be right but do not prove (1). I reply that Leibniz's Law handles property identity, as it does for everything else, then respond to a variety of objections and confusions.
Two dogmas of empiricism, by W. V. Quine.--In defense of a dogma, by H. P. Grice and P. F. Strawson.--The analytic and the synthetic: an untenable dualism, by M. G. White.--Synonymity, by B. Mates.--The meaning of a word, by J. L. Austin.--Meaning and synonymy in natural languages, by R. Carnap.--Analytic-synthetic, by J. Bennett.--On "analytic," by R. M. Martin.--Selected bibliography (p. -196).
It is, I suppose, a truism that an adequate theory of meaning for a natural language L will associate each sentence of L with its meaning. But the converse does not hold. A theory that associates each sentence with its meaning is not, by virtue of that fact, an adequate theory of meaning. For it is also a truism that a semantic theory should explain the (interesting and explicable) semantic facts. And one cannot decree that the relevant facts are all reportable with instances of schemata like ‘S means that p’ or ‘S, by virtue of its meaning, is true iff p’. Investigation suggests that there is much more for semanticists to explain: natural languages exhibit synonymies, ambiguities, and entailments; for any string of words, there are endlessly many meanings it cannot have; there are semantic generalizations, including crosslinguistic generalizations, that go uncaptured and unexplained by merely associating sentences with their meanings; etc. Initially, one might think these facts are “peripheral” and can thus be ignored if the aim is to explain why sentences mean what they do. But the study of natural language suggests otherwise. (One can’t tell, in advance of investigation, which facts are peripheral to a given domain. It was initially tempting to think that one could ignore falling bodies, and the tides, if the aim was to explain why planets move as they do.)