The now-classic Metaphors We Live By changed our understanding of metaphor and its role in language and the mind. Metaphor, the authors explain, is a fundamental mechanism of mind, one that allows us to use what we know about our physical and social experience to provide understanding of countless other subjects. Because such metaphors structure our most basic understandings of our experience, they are "metaphors we live by"--metaphors that can shape our perceptions and actions without our ever noticing them. In this updated edition of Lakoff and Johnson's influential book, the authors supply an afterword surveying how their theory of metaphor has developed within the cognitive sciences to become central to the contemporary understanding of how we think and how we express our thoughts in language.
Concepts are the elementary units of reason and linguistic meaning. They are conventional and relatively stable. As such, they must somehow be the result of neural activity in the brain. The questions are: Where? and How? A common philosophical position is that all concepts—even concepts about action and perception—are symbolic and abstract, and therefore must be implemented outside the brain’s sensory-motor system. We will argue against this position using (1) neuroscientific evidence; (2) results from neural computation; and (3) results about the nature of concepts from cognitive linguistics. We will propose that the sensory-motor system has the right kind of structure to characterise both sensory-motor and more abstract concepts. Central to this picture are the neural theory of language and the theory of cogs, according to which brain structures in the sensory-motor regions are exploited to characterise the so-called “abstract” concepts that constitute the meanings of grammatical constructions and general inference patterns.
From the late 1950s until 1975, cognition was understood mainly as disembodied symbol manipulation in cognitive psychology, linguistics, artificial intelligence, and the nascent field of Cognitive Science. The idea of embodied cognition entered the field of Cognitive Linguistics at its beginning in 1975. Since then, cognitive linguists, working with neuroscientists, computer scientists, and experimental psychologists, have been developing a neural theory of thought and language (NTTL). Central to NTTL are the following ideas: (a) we think with our brains, that is, thought is physical and is carried out by functional neural circuitry; (b) what makes thought meaningful are the ways those neural circuits are connected to the body and characterize embodied experience; (c) so-called abstract ideas are embodied in this way as well, as is language. Experimental results in embodied cognition are seen not only as confirming NTTL but also as explained via NTTL, mostly via the neural theory of conceptual metaphor. Left behind more than three decades ago is the old idea that cognition uses the abstract manipulation of disembodied symbols that are meaningless in themselves but that somehow constitute internal “representations of external reality” without serious mediation by the body and brain. This article uniquely explains the connections between embodied cognition results since that time and results from cognitive linguistics, experimental psychology, computational modeling, and neuroscience.
_Moral Politics_ takes a fresh look at how we think and talk about political and moral ideas. George Lakoff analyzed recent political discussion to find that the family—especially the ideal family—is the most powerful metaphor in politics today. Revealing how family-based moral values determine views on issues as diverse as crime, gun control, taxation, social programs, and the environment, Lakoff looks at how conservatives and liberals link morality to politics through the concept of family and how these ideals diverge. Arguing that conservatives have exploited the connection between morality, the family, and politics, while liberals have failed to recognize it, Lakoff explains why the conservative moral position has not been effectively challenged. A wake-up call to political pundits on both the left and the right, this work redefines how Americans think and talk about politics.
Evidence is presented to show that the role of a generative grammar of a natural language is not merely to generate the grammatical sentences of that language, but also to relate them to their logical forms. The notion of logical form is to be made sense of in terms of a natural logic, a logic for natural language, whose goals are to express all concepts capable of being expressed in natural language, to characterize all the valid inferences that can be made in natural language, and to mesh with adequate linguistic descriptions of all natural languages. The latter requirement imposes empirical linguistic constraints on natural logic. A number of examples are discussed.
Rips et al. appear to discuss, and then dismiss with counterexamples, the brain-based theory of mathematical cognition given in Lakoff and Núñez (2000). Instead, they present another theory of their own that they correctly dismiss. Our theory is based on neural learning. Rips et al. misrepresent our theory as being directly about real-world experience and mappings directly from that experience.
A natural language is a unified and integrated system, and the serious study of one part of the system inevitably involves one in the study of many other parts, if not the system as a whole. For this reason, the study of small, isolated fragments of a language—however necessary, valuable, and difficult this may be—will often make us think that we understand more than we really do. The fact is that you can’t really study one phenomenon adequately without studying a great many other related phenomena, and the way they fit together in terms of the linguistic system as a whole. This is the sort of thing a linguist learns very early in his career. Experience in descriptive linguistics, even at an elementary level, will force a linguist to come to grips with a wide range of complex data in some language, perhaps even English, and the truism soon emerges. But, due to the vagaries of our educational institutions, few philosophers or logicians receive training in linguistic description. Consequently, much of the discussion of natural language in the philosophical and logical literature is based on a very small sampling of data which is skewed in nontrivial ways. True, one has to start somewhere, and a great deal has been learned by ordinary language philosophers who have looked at only a handful of relatively simple examples and by logicians who have studied what by natural language standards are only minuscule fragments. But now that philosophers and logicians are turning to more detailed studies of natural language phenomena, it is perhaps the right time to suggest that philosophical and logical training be expanded to include the study of natural languages as entire systems.
I don’t mean to suggest, for example, that logicians should stop their systematic study of small fragments, but rather that a knowledge of the kinds of phenomena outside of those fragments can enrich the study of fragments and give one a more realistic picture of what one does and does not know about natural language.