This paper offers both a theoretical and an experimental perspective on the relationship between connectionist and Classical (symbol-processing) models. First, a serious flaw in Fodor and Pylyshyn's argument against connectionism is pointed out: if, in fact, a part of their argument is valid, then it establishes a conclusion quite different from that which they intend, a conclusion which is demonstrably false. The source of this flaw is traced to an underestimation of the differences between localist and distributed representation. It has been claimed that distributed representations cannot support systematic operations, or that if they can, then they will be mere implementations of traditional ideas. This paper presents experimental evidence against this conclusion: distributed representations can be used to support direct structure-sensitive operations, in a manner quite unlike the Classical approach. Finally, it is argued that even if Fodor and Pylyshyn's argument that connectionist models of compositionality must be mere implementations were correct, this would still not be a serious argument against connectionism as a theory of mind.
Primarily a response to Paul Horwich's "Composition of Meanings", the paper attempts to refute his claim that compositionality (roughly, the idea that the meaning of a sentence is determined by the meanings of its parts and the way they are combined) imposes no substantial constraints on semantic theory or on our conception of the meanings of words or sentences.
Concepts are the constituents of thoughts. Therefore, concepts are vital to any theory of cognition. However, despite their widely accepted importance, there is little consensus about the nature and origin of concepts. Thanks to the work of Lawrence Barsalou, Jesse Prinz and others, concept empiricism has been gaining momentum within the philosophy and psychology literature. Concept empiricism maintains that all concepts are copies, or combinations of copies, of perceptual representations; that is, all concepts are couched in the codes of perceptual representation systems. It is widely agreed that any satisfactory theory of concepts must account for how concepts semantically compose (the compositionality requirement) and explain how their intentional content is determined (the content determination requirement). In this paper, I argue that concept empiricism has serious problems satisfying these two requirements. Therefore, although stored perceptual representations may facilitate some traditionally conceptual tasks, concepts should not be identified with copies of perceptual representations.
I argue that compositionality (in the sense of homomorphic interpretation) is compatible with radical and pervasive contextual effects on interpretation. Apparent problems with this claim lose their force if we are careful in distinguishing the question of how a grammar assigns interpretations from the question of how people figure out which interpretations the grammar assigns. I demonstrate, using a simple example, that this latter task must sometimes be done not by computing a derivation defined directly by the grammar, but through the use of pragmatic background knowledge and extragrammatical reasoning, even when the grammar is designed to be fully compositional. The fact that people must sometimes use global pragmatic mechanisms to identify truth conditions therefore tells us nothing about whether the grammar assigns truth conditions compositionally. Compositional interpretation (or the lack thereof) is identifiable not by the mechanisms necessary to calculating truth conditions, but by the structural relation between the interpretation of a phrase in context and the interpretations of its parts in that same context. Even if this relation varies by context, an invariant grammar is possible if grammars can "invoke" pragmatic concepts; but this does not imply that grammatical theory must explain these concepts or incorporate a theory of pragmatics.
Starting from the familiar observation that no straightforward treatment of pure quotation can be compositional in the standard (homomorphism) sense, we introduce general compositionality, which can be described as compositionality that takes linguistic context into account. A formal notion of linguistic context type is developed, allowing the context type of a complex expression to be distinct from those of its constituents. We formulate natural conditions under which an ordinary meaning assignment can be non-trivially extended to one that is sensitive to context types and satisfies general compositionality. As our main example we work out a Fregean treatment of pure quotation, but we also indicate that the method applies to other kinds of context, e.g. intensional contexts.
There are two principles which bear the name Frege's principle: the principle of compositionality and the context principle. The aim of this contribution is to investigate whether this is justified: did Frege accept both principles at the same time, did he hold the one principle but not the other, or did he, at some moment, change his opinion? The conclusion is as follows. There is a development in Frege's position. In the period of Grundlagen he adhered to a strict form of contextuality. He repeated contextuality in later writings, but became less strict. From 1914 on, pushed by the needs of research, he came close to compositionality. But he could never make the final step toward compositionality for principled reasons, and therefore he would always reject compositionality.
Ordinary semantic compositionality (meaning of whole determined from meanings of parts plus composition) can serve to explain how a hearer manages to assign an appropriate meaning to a new sentence. But it does not serve to explain how the speaker manages to find an appropriate sentence for expressing a new thought. For this we would need a principle of inverse compositionality, by which the expression of a complex content is determined by the expressions of its parts and the mode of composition. But this presupposes that contents have constituent structure, and this cannot be taken for granted. However, it can be proved that if a certain principle of substitutivity is valid for a particular language, then the meanings expressed by its sentences can justifiably be treated as structured. In its simplest form, this principle says that if in a complex expression a constituent is replaced by another constituent with a different meaning, the new complex has a meaning different from the original. This principle is again inversely related to the normal compositional principle of substitutivity. The combination of ordinary and inverse compositionality is here called 'strong compositionality'. The proof is carried out in the algebraic framework developed by Wilfrid Hodges and Dag Westerståhl.
The standard argument against the compositionality of adjective-noun compounds containing "red" says that "red" does not make the same semantic contribution in each compound, because a red car has to be red outside whereas a red watermelon has to be red inside. Fodor's reply to that argument is that the inside/outside feature is semantically irrelevant, because "red F" just means F which is red for Fs. That account agrees with our intuitions concerning analyticity; but it seems to be in conflict with a central test for understanding: a person who knows nothing about these expressions beyond what Fodor's account offers is far from being able to apply them successfully.
The present paper studies the general implications of the principle of compositionality for the organization of grammar. It will be argued that Janssen's (1986) requirement that syntax and semantics be similar algebras is too strong, and that the more liberal requirement that syntax be interpretable into semantics leads to a formalization that can be motivated and applied more easily, while it avoids the complications that encumber Janssen's formalization. Moreover, it will be shown that this alternative formalization even allows one to further complete the formal theory of compositionality, in that it is capable of clarifying the role played by translation, model-theoretic interpretation and meaning postulates, of which the latter two aspects received little or no attention in Montague (1970) and Janssen (1986).
It has been argued that prototypes cannot compose, and that for this reason concepts cannot be prototypes (Osherson and Smith in Cognition 9:35–58, 1981; Fodor and Lepore in Cognition 58:253–270, 1996; Connolly et al. in Cognition 103:1–22, 2007). In this paper I examine the intensional and extensional approaches to prototype compositionality, arguing that neither succeeds in its present formulation. I then propose a hybrid extensional theory of prototype compositionality, according to which the extension of a complex concept is determined as a function of what triggers its constituent prototypes. I argue that the theory escapes the problems traditionally raised against extensional theories of compositionality.
It has been argued that philosophers who base their theories of meaning on communicative intentions and language conventions cannot accommodate the fact that natural languages are compositional. In this paper I show that if we pay careful attention to Grice's notion of "resultant procedures" we see that this is not the case. The argument, if we leave out all the technicalities, is fairly simple. Resultant procedures tell you how to combine utterance parts, like words, into larger units, like sentences. You cannot have that unless you have R-correlations (reference) and D-correlations (denotation). These in turn, the argument goes, depend on communicative intentions, since without communicative intentions any attempt to R-correlate or D-correlate a word with an object or set of objects would inevitably result in correlation-relations between that word and everything that exists. In other words, without communicative intentions in the equation it would turn out that every time we speak, we inevitably speak about everything, but clearly we do not. So communicative intentions, instead of being nebulous things in possible conflict with the Principle of Compositionality, are in fact a prerequisite for that very principle.
This book argues that languages are composed of sets of 'signs', rather than 'strings'. This notion, first posited by de Saussure in the early 20th century, has for decades been neglected by linguists, particularly following Chomsky's heavy critiques of the 1950s. Since the emergence of formal semantics in the 1970s, however, the issue of compositionality has gained traction in the theoretical debate, becoming a selling point for linguistic theories. Yet the concept of 'compositionality' itself remains ill-defined, an issue this book addresses. Positioning compositionality as a cornerstone in linguistic theory, it argues that, contrary to widely held beliefs, there exist non-compositional languages, which shows that the concept of compositionality has empirical content. The author asserts that the existence of syntactic structure can follow from the fact that a compositional grammar cannot be delivered without prior agreement on the syntactic structure of the constituents.
Charles S. Peirce's pragmatist theory of logic teaches us to take the context of utterances as an indispensable logical notion without which there is no meaning. This is not an argument against compositionality per se, since it is possible to posit extra arguments to the meaning function that composes complex meaning. However, that method would be inappropriate for a realistic notion of the meaning of assertions. To accomplish a realistic notion of meaning (as opposed, e.g., to algebraic meaning), Sperber and Wilson's Relevance Theory (RT) may be applied in the spirit of Peirce's Pragmatic Maxim (PM): the weighing of information depends on (i) the practical consequences of accommodating the chosen piece of information introduced in communication, and (ii) what will ensue in actually using that piece in further cycles of discourse. Peirce's unpublished papers suggest a relevance-like approach to meaning. Contextual features influenced his logic of Existential Graphs (EG). Arguments are presented for and against the view that EGs endorse the non-compositionality of meaning.
In this book leading scholars from every relevant field report on all aspects of compositionality, the notion that the meaning of an expression can be derived from its parts. Understanding how compositionality works is a central element of syntactic and semantic analysis and a challenge for models of cognition. It is a key concept in linguistics and philosophy and in the cognitive sciences more generally, and is without question one of the most exciting fields in the study of language and mind. The authors of this book report critically on lines of research in different disciplines, revealing the connections between them and highlighting current problems and opportunities.

The force and justification of compositionality have long been contentious. First proposed by Frege as the notion that the meaning of an expression is generally determined by the meaning and syntax of its components, it has since been deployed as a constraint on the relation between theories of syntax and semantics, as a means of analysis, and more recently as underlying the structures of representational systems, such as computer programs and neural architectures. The Oxford Handbook of Compositionality explores these and many other dimensions of this challenging field. It will appeal to researchers and advanced students in linguistics and philosophy and to everyone concerned with the study of language and cognition, including those working in neuroscience, computational science, and bio-informatics.
The problem of concept representation is relevant for many subfields of cognitive research, including psychology and philosophy, as well as artificial intelligence. In particular, in recent years it has received a great deal of attention within the field of knowledge representation, due to its relevance for both knowledge engineering and ontology-based technologies. However, the notion of a concept itself turns out to be highly disputed and problematic. In our opinion, one of the causes of this state of affairs is that the notion of a concept is, to some extent, heterogeneous, and encompasses different cognitive phenomena. This results in a strain between conflicting requirements, such as compositionality on the one hand and the need to represent prototypical information on the other. In some ways artificial intelligence research shows traces of this situation. In this paper, we propose an analysis of this current state of affairs. Since it is our opinion that a mature methodology with which to approach knowledge representation and knowledge engineering should also take advantage of the empirical results of cognitive psychology concerning human abilities, we outline some proposals for concept representation in formal ontologies which take into account suggestions from psychological research. Our basic assumption is that knowledge representation systems whose design takes into account evidence from experimental psychology (and which, therefore, are more similar to the human way of organizing and processing information) may give better results in many applications (e.g. in the fields of information retrieval and the semantic web).
The paper examines an alleged distinction claimed by Van Gelder to exist between two different, but equally acceptable ways of accounting for the systematicity of cognitive output (two "varieties of compositionality"): "concatenative compositionality" vs. "functional compositionality." The second is supposed to provide an explanation alternative to the Language of Thought Hypothesis. I contend that, if the definition of "concatenative compositionality" is taken in a way different from the official one given by Van Gelder (but one suggested by some of his formulations), then there is indeed a different sort of compositionality; in that case, however, the second variety is not an alternative to the language of thought. If, on the other hand, the concept of concatenative compositionality is taken along the lines of Van Gelder's explicit definition, then there is no reason to think that there is an alternative way of explaining systematicity.
When reasons are given for compositionality, the arguments usually purport to establish compositionality in an almost a priori manner. I will rehearse these arguments for thinking that compositionality is a priori true, or almost a priori true, and will find all of them inconclusive. This, in itself, is no reason against compositionality, but it is a reason to try to establish or defend the principle on other than quasi-a priori grounds.
We introduce several senses of the principle of compositionality. We illustrate the difference between them with the help of some recent results obtained by Cameron and Hodges on compositional semantics for languages of imperfect information.
We consider two formalisations of the notion of a compositional semantics for a language, and find some equivalent statements in terms of substitutions. We prove a theorem stating necessary and sufficient conditions for the existence of a canonical compositional semantics extending a given partial semantics, after discussing what features one would want such an extension to have. The theorem involves some assumptions about semantical categories in the spirit of Husserl and Tarski.
I show that the model-theoretic meaning that can be read off the natural deduction rules for disjunction fails to have certain desirable properties. I use this result to argue against a modest form of inferentialism which uses natural deduction rules to fix model-theoretic truth-conditions for logical connectives.
So-called dual factor theories are proposed in an attempt to provide an explanation of the meaning of our utterances and the content of our mental states in terms that involve two different theories, each one serving separate concerns. One type of theory deals with the causal explanatory aspect of contentful mental states and/or sentences. The other type deals with those contentful mental states and/or sentences as related to propositions, i.e., as objects that can be assigned referential truth-conditions (cf. McGinn, 1982; Block, 1986). I take dual component approaches to meaning to be yet another variety of the Fregean tradition. This may seem odd, since Frege (but certainly not two-factor theorists) wants a thoroughly non-psychologized account of meaning. So, an explanation of my statement is needed. Because this story is well known, I will be brief. The sense of a sentence is the thought it expresses, where this thought is understood as something objective and fully independent of our ideas in any psychological sense. Its reference is nothing but a truth-value, true or false (cf. Frege, 1892). The concept of truth that arises out of postulating this kingdom of objective thoughts is, Frege believes, the necessary mediating tool between the symbols and the entities of which the world is made. The very possibility of a proper scientific account of the world can be guaranteed only by appealing to objective scientific laws that, like objective thoughts, are immune to relativistic challenges. But, at the same time, for this objective kingdom of thoughts to play the role of a mediating entity between language and world, it is necessary to introduce an essential, but new idea. The idea is that not only can we grasp those thoughts, but, also in this grasping process, their objective content, their sense, remains the same.
The paper draws attention to an important, but apparently neglected distinction relating to axiomatic theories of truth, viz. the distinction between weakly and strongly truth-compositional theories of truth. The paper argues that the distinction might be helpful in classifying weak axiomatic theories of truth and examines some of them with respect to it.
This is the first part of a two-part article on semantic compositionality, that is, the principle that the meaning of a complex expression is determined by the meanings of its parts and the way they are put together. Here we provide a brief historical background, a formal framework for syntax and semantics, precise definitions, and a survey of variants of compositionality. Stronger and weaker forms are distinguished, as well as generalized forms that cover extra-linguistic context dependence as well as linguistic context dependence. In the second article, we survey arguments for and arguments against the claim that natural languages are compositional, and consider some problem cases. It will be referred to as Part II.
This is the second part of a two-part article on compositionality, i.e. the principle that the meaning of a complex expression is determined by the meanings of its parts and the way they are put together. In the first, Pagin and Westerståhl (2010), we provide a general historical background, a formal framework, definitions, and a survey of variants of compositionality. It will be referred to as Part I. Here we discuss arguments for and against the claim that natural languages have a compositional semantics. We also discuss some problem cases, including belief reports, quotation, idioms, and ambiguity.
Inferentialism, which I am going to present in detail in the following sections, is the view that meanings are, roughly, roles that are acquired by types of sounds and inscriptions in virtue of their being treated according to rules of our language games, roughly in the sense in which wooden pieces acquire certain roles by being treated according to the rules of chess. The most important consequences are that (i) a meaning is not an object labeled (stood for, represented ...) by an expression; and that (ii) meaning is normative in the sense that to say that an expression means thus and so is to say that it should be used so and so. The founding father of inferentialism is Brandom (1994; 2000). (However, nothing in this paper hinges on the fact that the version of inferentialism defended here is identical with Brandom's.) This position provokes two kinds of objections. First, there are general objections towards the very normativity of meaning, which do not specifically target inferentialism; these I have addressed elsewhere. Besides this, there are objections targeted more specifically at inferentialism. Probably the most discussed specimen of such objections is the objection, repeatedly raised by Jerry Fodor, Ernest LePore and others, to the effect that though meanings should be compositional, the compositionality of inferential roles is unattainable. This is the kind of objection I am going to deal with here. (Hand in hand with this objection go various allegations of the circularity of inferentialism, which we will also discuss.) To do this, I will exploit the long-standing comparison of language to chess, as it seems particularly helpful for making the inferentialist account of language plausible. This comparison, to be sure, has its limits beyond which it may become severely misleading; but as long as we keep them in mind, it can serve us very well.
Compositionality is the property that the meaning of any complex expression is determined by the meanings of its parts and the way they are put together. The language can be natural or formal, but it has to be interpreted. That is, meanings, or more generally, semantic values of some sort, must be assigned to linguistic expressions, and compositionality concerns the distribution of these values. Even though similar ideas were expressed both in antiquity and in the middle ages (e.g. by Abelard and Buridan), Gottlob Frege is generally taken to be the first person to state explicitly the modern notion of compositionality and to claim that it is an essential feature of human language.
Is the principle of semantic compositionality compatible with the principle of semantic holism? The question is of interest, since both principles have a lot that speaks for them, and since they do seem to be in conflict. The view that natural languages have compositional structure is almost unavoidable, since linguistic communication by means of new combinations of words would be virtually incomprehensible otherwise. And holism too seems generally plausible, since the meaning of an expression is directly connected with the way that expression interacts with others.
Compositionality is the idea that the meanings of complex expressions (or concepts) are constructed from the meanings of the less complex expressions (or concepts) that are their constituents. Over the last few years, we have just about convinced ourselves that compositionality is the sovereign test for theories of lexical meaning. So hard is this test to pass, we think, that it filters out practically all of the theories of lexical meaning that are current in either philosophy or cognitive science. Among the casualties are, for example, the theory that lexical meanings are statistical structures (like stereotypes); the theory that the meaning of a word is its use; the theory that knowing the meaning of (at least some) words requires having a recognitional capacity for (at least some) of the things that it applies to; and the theory that knowing the meaning of a word requires knowing criteria for applying it. Indeed, we think that only two theories of the lexicon survive the compositionality constraint: viz., the theory that all lexical meanings are primitive and the theory that some lexical meanings are primitive and the rest are definitions. So compositionality does a lot of work in lexical semantics, according to our lights.
Problems of Compositionality is a revised version of Zoltán Szabó's 1995 doctoral dissertation. Of its five chapters, three have appeared (in heavily modified form) in print independently, so I will concentrate most of my remarks on the second and third chapters, which remain unpublished outside the book. As it happens, I find these two chapters to be the most philosophically rewarding of the book. The principle of compositionality is a general constraint on the shape of a theory of meaning. Szabó gives the following initial formulation of the principle: The meaning of a complex expression is determined by the meanings of its constituents and by its structure. (3) Recent discussion of compositionality branches in a number of different directions, including (at least) disputes over the precise formulation of the principle, investigations of the mathematical features of various such formulations, exploration of a plethora of apparent counterexamples to the compositionality of natural languages, scholarly work on the history of the principle (especially its role in Frege), and employment of the principle as a tool in other philosophical disputes. Szabó's path through this thicket begins, in the first chapter, with a defense of an idiosyncratic version of the compositionality principle against some more traditional alternatives, proceeds in the second and third chapters to the oft-neglected and philosophically crucial task of asking why the principle of compositionality ought to be one we seek to impose, and concludes in the fourth and fifth chapters by considering and rejecting two putative counterexamples (manifesting in the semantics of adjectives and of definite descriptions) to the principle. The principle of compositionality is most commonly given a functional implementation: a language L is compositional iff the meaning of a complex expression α of L is a function of the meanings of the parts of α and the syntactic structure of α. Equivalently, L is compositional iff synonyms can be intersubstituted salva significatione in complex expressions of L. Szabó, however, rejects the functional/substitutional...
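The functional formulation quoted in this review recurs throughout the abstracts above (as "homomorphic interpretation", "similar algebras", and so on). A minimal sketch of that formulation in standard algebraic notation follows; the symbols are illustrative conventions, not drawn from any single paper listed here:

```latex
% Compositionality as a homomorphism condition (illustrative notation).
% Let $E$ be the set of expressions of a language $L$, built up by
% syntactic operations $\sigma$, and let $\mu : E \to M$ assign meanings.
% Then $\mu$ is compositional iff for every $n$-ary syntactic rule
% $\sigma$ there is a semantic operation $r_\sigma$ on meanings with
\[
  \mu\bigl(\sigma(e_1,\dots,e_n)\bigr)
    = r_\sigma\bigl(\mu(e_1),\dots,\mu(e_n)\bigr).
\]
% The substitutional variant follows immediately: if $\mu(e_i) =
% \mu(e_i')$, the right-hand side is unchanged, so synonyms can be
% intersubstituted without change of meaning in any complex expression.
```

This also makes the equivalence claimed in the review concrete: the function clause gives the substitution clause directly, and (given suitable assumptions about the syntactic algebra) the converse holds as well.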
This paper contains a discussion of how the concept of compositionality is to be extended from context invariant to context dependent meaning, and of how the compositionality of natural language might conflict with context dependence. Several new distinctions are needed, including a distinction between a weaker (e-) and a stronger (ec-) concept of compositionality for context dependent meaning. The relations between the various notions are investigated. A claim by Jerry Fodor that there is a general conflict between context dependence and compositionality is considered. There is in fact a possible conflict between ec-compositionality and context dependence, but not of the kind Fodor suggests. It turns on the presence of so-called unarticulated constituents, in John Perry's sense. Because of this phenomenon, on some semantic accounts there might be a variation in the meaning of a complex expression between contexts without any corresponding variation in any of the syntactic parts of that complex. The conflict can be resolved in several ways. One way is to make the unarticulated context dependence explicit only in the meta-language, which makes it into an unarticulated constituent account. A recent argument by Jason Stanley against such accounts is discussed. According to Stanley, certain readings of English sentences involving binding of contextual variables are unavailable in these theories. After considering a reply to Stanley by François Recanati, I present an outline of a fully compositional theory, of the unarticulated constituent variety, which does deliver these readings. Concluding remarks address, inter alia, the semantics/pragmatics distinction.
The Principle of Semantic Compositionality (sometimes called Frege's Principle) is the principle that the meaning of a (syntactically complex) whole is a function only of the meanings of its (syntactic) parts together with the manner in which these parts were combined. This principle has been extremely influential throughout the history of formal semantics; it has had a tremendous impact upon modern linguistics ever since Montague Grammars became known; and it has more recently shown up as a guiding principle for a certain direction in cognitive science. Despite the fact that The Principle is vague or underspecified at a number of points (such as what meaning is, what counts as a part, what counts as a syntactic complex, and what counts as combination), this has not stopped some people from viewing The Principle as obviously true, true almost by definition. And it has not stopped other people from viewing The Principle as false, almost pernicious in its effect. Some of these latter theorists think that it is an empirically false principle, while others think of it as a methodologically wrong-headed way to proceed.
Beginning in the late 1960s, psychologists began to challenge the definitional theory of concepts. According to that theory, a concept is a mental representation comprising representations of properties (or "features") that are individually necessary and jointly sufficient for membership in a category. In place of the definitional view, psychologists initially put forward the prototype theory of concepts, according to which concepts comprise representations of features that are typical, salient, and diagnostic for category membership, but not necessary for it. The prototype theory gained considerable support in the 1970s, but came under attack in the 1980s. One objection, most forcefully advanced by Jerry Fodor, is that prototypes do not combine compositionally. Compositionality is said to be an adequacy condition on a theory of concepts. If prototypes don't compose, then prototypes are not concepts. Or so the argument goes.
The principle of semantic compositionality, as Jerry Fodor and Ernie Lepore have emphasized, imposes constraints on theories of meaning that are hard for psychological or epistemic accounts to meet. Here, I argue that this general tendency is exemplified in Michael Dummett's account of meaning. On that account, the so-called manifestability requirement has the effect that the speaker who understands a sentence s must be able to tell whether or not s satisfies central semantic conditions. This requirement is not met by truth-conditional accounts of meaning. On Dummett's view, it is met by a proof-conditional account: understanding amounts to knowledge of what counts as a proof of a sentence. A speaker is supposed always to be capable of deciding whether or not a given object is a proof of a given sentence she understands. This requirement comes into conflict with compositionality. If meaning is compositionally determined, then all you need for understanding a sentence is what you get from combining your understanding of the parts according to the mode of composition. But that knowledge is not always sufficient for recognizing any proof at all of a given sentence. Dummett's proof-theoretic argument to the contrary is mistaken.
Semantic Compositionality is the principle that the meaning of a syntactically complex expression is a function only of the meanings of its syntactic components together with their syntactic mode of combination. Various scholars have argued against this Principle, including the present author in earlier works. One of these arguments was the Argument from Ambiguity, which will be of concern in the present article. Opposed to the considerations raised against the Principle are certain formal arguments that purport to show that there is no empirical content to the Principle. One of these formal arguments makes use of the notion of free algebras. The present article investigates the relationship between these two types of argument.
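The free-algebra argument mentioned in this abstract has a standard formal core, which can be sketched as follows (the notation below is a common textbook rendering in the Montague tradition, not drawn from the article itself): treat the syntax as a free term algebra and a compositional meaning assignment as a homomorphism into a semantic algebra.

```latex
% Syntax: a free (term) algebra A generated by the lexicon, with one
% constructor \sigma per syntactic rule. Semantics: an algebra B with a
% corresponding operation g_\sigma for each rule. A meaning function
% m : A \to B is compositional iff it is a homomorphism:
\[
  m\bigl(\sigma(t_1,\ldots,t_n)\bigr) \;=\; g_\sigma\bigl(m(t_1),\ldots,m(t_n)\bigr).
\]
% Because A is free, any assignment of values to the generators (the
% lexical items) extends uniquely to such a homomorphism; this is the
% formal core of the charge that the Principle, taken by itself,
% imposes no empirical constraint.
```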
The paper attempts to clarify some fundamental aspects of an explanation of the concept of truth which is neither deflationary nor substantive. The main aspect examined in detail concerns the ontological dimension of truth, the mind/language-world connection traditionally associated with the concept of truth. It is claimed that it does not make sense to defend or reject a relatedness of truth to the ontological dimension so long as the kind of presupposed or envisaged ontology is not made explicit and critically examined. In particular, it is shown that generally an objectual ontology is – often only implicitly – presupposed, i.e., an ontology admitting objects (substances), properties, relations, sometimes also facts, events, and the like. The paper demonstrates that such an ontology derives from the Principle of Semantic Sentential Compositionality and that this principle should be rejected. It introduces instead the Principle of Semantic Sentential Contextuality (or Context Principle) as the semantic basis of a new ontology, an ontology of primary states of affairs. After sketching such an ontology, it is shown that the relatedness of truth to the ontological dimension becomes intelligible.
In The Logical Basis of Metaphysics, Dummett argues at length that Geach has been wrong in taking the sense of a predicate to be a function that sends the sense of a proper name to that of a sentence, and claims that it should instead be a means to determine the referent of the predicate, as is suggested by Frege's sense-determines-reference (SDR) principle. This disagreement between Dummett and Geach calls for a serious investigation into two of Frege's sense-related principles, namely the Compositionality thesis and the SDR thesis. By making precise both theses in terms of supervenience, we pin down a preferable sense of compositionality for senses, and resolve the debate in question.
This book examines the hypothesis of "direct compositionality", which requires that semantic interpretation proceed in tandem with syntactic combination. Although direct compositionality was associated with the dominant view in formal semantics of the 1970s and 1980s, its feasibility remained unsettled, and more recently the discussion as to whether or not this view can be maintained has receded. The syntax-semantics interaction is now often seen as a process in which the syntax builds representations which, at the abstract level of logical form, are sent for interpretation to the semantics component of the language faculty. In the first extended discussion of the hypothesis of direct compositionality for twenty years, this book considers whether its abandonment might have been premature and whether in fact direct compositionality is not after all a simpler and more effective conception of the grammar than the conventional account of the syntax-semantics interface in generative grammar. It contains contributions from both sides of the debate, locates the debate in the setting of a variety of formal theories, and draws on examples from a range of languages and a range of empirical phenomena.
Higginbotham (1986) argues that conditionals embedded under quantifiers (as in ‘no student will succeed if they goof off’) constitute a counterexample to the thesis that natural language is semantically compositional. More recently, Higginbotham (2003) and von Fintel and Iatridou (2002) have suggested that compositionality can be upheld, but only if we assume the validity of the principle of Conditional Excluded Middle. I argue that these authors' proposals deliver unsatisfactory results for conditionals that, at least intuitively, do not appear to obey Conditional Excluded Middle. Further, there is no natural way to extend their accounts to conditionals containing ‘unless’. I propose instead an account that takes both ‘if’ and ‘unless’ statements to restrict the quantifiers in whose scope they occur, while also contributing a covert modal element to the semantics. In providing this account, I also offer a semantics for unquantified statements containing ‘unless’.
In the context of debates about what form a theory of meaning should take, it is sometimes claimed that one cannot understand an intersective modifier-head construction (e.g., ‘pet fish’) without understanding its lexical parts. Neo-Russellians like Fodor and Lepore contend that non-denotationalist theories of meaning, such as prototype theory and theory theory, cannot explain why this is so, because they cannot provide for the ‘reverse compositional’ character of meaning. I argue that reverse compositionality is a red herring in these debates. I begin by setting out some positive arguments for reverse compositionality and showing that they fail. Then I show that the principle of reverse compositionality has two big strikes against it. First, it is incompatible with all theories of meaning on the market, including the denotationalism favored by neo-Russellians. Second, it explains nothing that is not already explained by its venerable predecessor, the principle of (forward) compositionality.
There are good reasons to think natural languages are compositional. But compound nominals (CNs) are largely productive constructions that have proven highly recalcitrant to compositional semantic analysis. I evaluate existing proposals to treat CNs compositionally and argue that they are unsuccessful. I then articulate an alternative proposal according to which CNs contain covert indexicals. Features of the context allow a variety of relations to be expressed using CNs, but this variety is not expressed in the lexicon or the semantic rules of the language. This proposal accounts for the diversity of contents CNs can be used to express while preserving compositionality. Finally, I defend this proposal against some recent anti-contextualist arguments.
The nature of complex concepts has important implications for the computational modelling of the mind, as well as for the cognitive science of concepts. This paper outlines the way in which RVC (a Relational View of Concepts) accommodates a range of complex concepts, cases which have been argued to be non-compositional. RVC attempts to integrate a number of psychological, linguistic and psycholinguistic considerations with the situation-theoretic view that information-carrying relations hold only relative to background situations. The central tenet of RVC is that the content of concepts varies systematically with perspective. The analysis of complex concepts indicates that compositionality too should be considered to be sensitive to perspective. Such a view accords with concepts and mental states being situated; the implications for theories of concepts and for computational models of the mind are discussed.
The problem of concept representation is relevant for many sub-fields of cognitive research, including psychology and philosophy, as well as artificial intelligence. In particular, in recent years it has received a great deal of attention within the field of knowledge representation, due to its relevance for both knowledge engineering and ontology-based technologies. However, the notion of a concept itself turns out to be highly disputed and problematic. In our opinion, one of the causes of this state of affairs is that the notion of a concept is, to some extent, heterogeneous, and encompasses different cognitive phenomena. This results in a strain between conflicting requirements, such as compositionality on the one hand and the need to represent prototypical information on the other. In some ways artificial intelligence research shows traces of this situation. In this paper, we propose an analysis of this current state of affairs. Since it is our opinion that a mature methodology with which to approach knowledge representation and knowledge engineering should also take advantage of the empirical results of cognitive psychology concerning human abilities, we outline some proposals for concept representation in formal ontologies which take into account suggestions from psychological research. Our basic assumption is that knowledge representation systems whose design takes into account evidence from experimental psychology (and which are therefore more similar to the human way of organizing and processing information) may give better results in many applications (e.g. in the fields of information retrieval and the semantic web).
The paper examines an alleged distinction, claimed by Van Gelder to exist, between two different but equally acceptable ways of accounting for the systematicity of cognitive output (two varieties of compositionality): concatenative compositionality vs. functional compositionality. The second is supposed to provide an explanation alternative to the Language of Thought Hypothesis. I contend that, if the definition of concatenative compositionality is taken in a way different from the official one given by Van Gelder (but one suggested by some of his formulations), then there is indeed a different sort of compositionality; however, the second variety is not an alternative to the language of thought in that case. On the other hand, if the concept of concatenative compositionality is taken along the lines of Van Gelder's explicit definition, then there is no reason to think that there is an alternative way of explaining systematicity.
I address the question whether Dummett's manifestation challenge to semantic realism can be disarmed by reflection on the compositionality of meaning. Building on work of Dummett and Wright, I develop in §§1–2 what I argue to be the most formidable version of the manifestation challenge. Along the way I review attempts by previous authors to deploy considerations about compositionality in realism's favour, and argue that they are unsuccessful. The formulation of the challenge I develop renders explicit something which I argue to be implicit in Dummett's and Wright's presentations: that the challenge depends on a contention about the constitution of speakers' states of declarative sentence understanding, i.e., that many such states incorporate abilities to recognize whether the associated sentences' truth conditions are satisfied. In §3 I argue that reflection on the compositionality of meaning reveals, first, that this contention must be rejected by the realist, and second, that it is unmotivated. This result does not settle the debate over the manifestation challenge, but it implies that the onus of argument does not currently rest with the realist.
James Higginbotham (1986) presents a theory of semantic interpretation which violates the principle of semantic compositionality. He gives an argument, by means of an example construction, in favor of his contention. I show that compositional theories have more resources than some researchers give them credit for, and that these can be used in two different ways to account for the phenomenon Higginbotham describes.