The meanings of donkey sentences cannot be captured using a procedure which, like Montague’s, uses the existential quantifiers of classical logic to translate indefinites and its variables to translate pronouns. The treatment of these examples requires meanings which depend on the context in which sentences appear, and thus necessitates a logic which models this context to some extent. If context is represented as the information conveyed in discourse, and the meanings of pronouns are enriched to depend on this information, the result is the E-Type approach (ETA) adapted by Heim (1990) from proposals in Evans (1980) and Cooper (1979). If the context is represented as a list of potential referents, and the meanings of indefinites are enriched to introduce new referents into this list, the result is a compositional formulation, like Groenendijk and Stokhof’s (1990), of the discourse representation theory (DRT) of Kamp (1981) and Heim (1982). Either tack suffices to capture the way in which the referents of he and it systematically correspond to the alternative possibilities described by the antecedent. Disjunction offers a parallel way of introducing alternatives in the antecedent of a conditional, as shown in (2).
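A minimal sketch can make the dynamic treatment of indefinites concrete. The toy model, predicates, and function names below are hypothetical illustrations in the general spirit of DRT and its compositional reformulations, not the paper’s formalism: contexts are lists of variable assignments, an indefinite extends each assignment with every possible new referent, and a conditional tests that every way of verifying its antecedent extends to the consequent.

```python
# Toy dynamic semantics for "If a farmer owns a donkey, he beats it."
# The model and lexicon are invented for illustration.

FARMER = {"pedro"}
DONKEY = {"chiquita", "daisy"}
OWNS = {("pedro", "chiquita"), ("pedro", "daisy")}
BEATS = {("pedro", "chiquita"), ("pedro", "daisy")}

def indefinite(var, pred):
    """'a donkey': extend each input assignment with every witness."""
    def update(contexts):
        return [dict(g, **{var: d}) for g in contexts for d in pred]
    return update

def atom(rel, *vars):
    """Test an atomic condition against each assignment."""
    def update(contexts):
        return [g for g in contexts if tuple(g[v] for v in vars) in rel]
    return update

def conditional(antecedent, consequent):
    """'if A, B' as a test: every way of verifying A must extend to B."""
    def update(contexts):
        return [g for g in contexts
                if all(consequent([h]) for h in antecedent([g]))]
    return update

sentence = conditional(
    lambda c: atom(OWNS, "x", "y")(
        indefinite("y", DONKEY)(indefinite("x", FARMER)(c))),
    atom(BEATS, "x", "y"),
)
print(bool(sentence([{}])))  # True: every owned donkey gets beaten
```

Because the indefinite introduces a referent into the context rather than a classical quantifier, the pronoun’s variable `"y"` in the consequent covaries with all the alternatives introduced in the antecedent.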
Originally published in 1972, Should Trees Have Standing? was a rallying point for the then burgeoning environmental movement, launching a worldwide debate on the basic nature of legal rights that reached the U.S. Supreme Court. Now, in the 35th anniversary edition of this remarkably influential book, Christopher D. Stone updates his original thesis and explores the impact his ideas have had on the courts, the academy, and society as a whole. At the heart of the book is an eminently sensible, legally sound, and compelling argument that the environment should be granted legal rights. For the new edition, Stone explores a variety of recent cases and current events--and related topics such as climate change and protecting the oceans--providing a thoughtful survey of the past and an insightful glimpse at the future of the environmental movement. This enduring work continues to serve as the definitive statement as to why trees, oceans, animals, and the environment as a whole should be bestowed with legal rights, so that the voiceless elements in nature are protected for future generations.
The commonplace view about metaphorical interpretation is that it can be characterized in traditional semantic and pragmatic terms, thereby assimilating metaphor to other familiar uses of language. We will reject this view, and propose in its place the view that, though metaphors can issue in distinctive cognitive and discourse effects, they do so without issuing in metaphorical meaning and truth, and so, without metaphorical communication. Our inspiration derives from Donald Davidson’s critical arguments against metaphorical meaning and Richard Rorty’s exploration of the diverse uses of language. But unlike these authors we ground our discussion squarely in distinctions about causal mechanisms in cooperative activity developed by H.P. Grice and others.
Both formal semantics and cognitive semantics are the source of important insights about language. By developing precise statements of the rules of meaning in fragmentary, abstract languages, formalists have been able to offer perspicuous accounts of how we might come to know such rules and use them to communicate with others. Conversely, by charting the overall landscape of interpretations, cognitivists have documented how closely interpretations draw on the commonsense knowledge that lets us make our way in the world. There is no opposition between these insights. Sooner or later we will have a semantics that responds to both. However, developing such a semantics is profoundly difficult, because there are certain tensions to be overcome in reconciling the two perspectives. For one thing, the overall landscape of meaning does seem to be characterized by a much richer ontology and more dynamic categories than are exhibited by the fragments typically studied in the formal tradition. One sign of strain is the recent tendency to talk of “procedural”, “non-compositional”, or “computational” semantics, as in Hamm, Kamp and van Lambalgen 2006, hereafter HK&vL. We think such locutions can serve as useful reminders to keep semantics fixed on the central question of how language allows us to share information that some have and others need to get. However, there is some danger that formalists will merely be put off by an idea that, taken literally, may not be such a good one. In this short article, we want to explore and defend the traditional realist view attributed by HK&vL to Lewis among others. In fact, this view offers a well-developed, extremely straightforward and robust account of the relation between semantics and cognition.
Moreover, while the realist view has ways of accommodating the representationalist insights of DRT (Lewis 1979; Thomason 1990; Stalnaker 1998), it remains unclear how “computational” semantics can account for the key data for the realist view: cases where we judge interlocutors to be ignorant about aspects of meaning in their native language (Kripke 1972; Putnam 1975; Stalnaker 1979; Williamson 1994).
The train of thought I will follow here begins with two facts about Husserl. First, the main and most intractable problems in interpreting him, and the major conflicts between his interpreters, arise from and are fed by the equivocality and unsteady meaning of his terminology. Second, Husserl has a highly developed theory of terminology, beginning with, but by no means limited to, the earliest periods of his thought. This theory of terminology, moreover, focuses on the causes of equivocality and unsteadiness of meaning. These two facts, taken together, suggest why there is something philosophically deep, some deep aspect of the self-knowledge of knowledge, in Husserl’s work, and help explain why figures of the stature of Heidegger and Carnap took so much interest in him. Nevertheless, I will suggest that Husserl’s theory of terminology is inadequate to his problems: that the self-knowledge of philosophy is here (as always) incomplete. This helps explain what both Heidegger and Carnap reject in Husserl, and therefore, in turn, why the question of how to give to or recover for terminology an unequivocal and fixed meaning becomes crucial for both of them. Both of them, in fact, approach this question in a way which is essentially a modification—albeit a root and branch modification—of Husserl’s approach. This paper, despite its title, will be devoted mostly to setting up the problem in Husserl. At the end I will then briefly describe Heidegger’s and Carnap’s contrasting solutions.
When we wish to frame or to communicate a precise and nuanced argument, we should first clarify whatever meaningful distinctions our reasoning exploits. That’s why every good paper begins by defining its terms. A tiger is a large and ferocious predatory cat, yellow with black stripes. A bachelor is an unmarried man. Freedom is the capacity to choose one’s actions for oneself, independent of causal forces in the outside world. Knowledge is justified true belief. Getting clear on our concepts is the process of analysis. It is such a fundamental part of philosophical practice that the preponderance of contemporary philosophical writing in English is described as ‘analytic’.
When classical mechanics is seen as the short-wavelength limit of quantum mechanics (i.e., as the limit of geometrical optics), it becomes clear just how serious and all-pervasive the measurement problem is. This formulation also leads us into the Bohm theory. But this theory has drawbacks: its nonuniqueness, in particular, and its nonlocality. I argue that these both reflect an underlying problem concerning information, which is actually a deeper version of the measurement problem itself.
This paper develops a general approach to contextual reasoning in natural language processing. Drawing on the view of natural language interpretation as abduction (Hobbs et al., 1993), we propose that interpretation provides an explanation of how an utterance creates a new discourse context in which its interpreted content is both true and prominent. Our framework uses dynamic theories of semantics and pragmatics, formal theories of context, and models of attentional state. We describe and illustrate a Prolog implementation.
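A toy weighted-abduction interpreter, in the rough spirit of Hobbs et al. (1993), can illustrate the idea of interpretation as least-cost explanation. The axioms, costs, and example query below are invented for illustration; the paper’s actual framework and Prolog implementation are far richer.

```python
# Weighted abduction in miniature: prove a goal from background axioms,
# assuming unproved literals at a cost; the cheapest proof is the
# preferred interpretation. Axioms and costs are hypothetical.

AXIOMS = {  # head: list of alternative bodies
    "engine_trouble": [["broken_pump"], ["broken_belt"]],
}
ASSUMPTION_COST = {"broken_pump": 5.0, "broken_belt": 2.0,
                   "engine_trouble": 10.0}

def best_proof(goal):
    """Return (cost, assumptions) of the cheapest abductive proof of goal."""
    options = [(ASSUMPTION_COST.get(goal, float("inf")), {goal})]  # assume it
    for body in AXIOMS.get(goal, []):          # or explain it via an axiom
        cost, assumed = 0.0, set()
        for lit in body:
            c, a = best_proof(lit)
            cost += c
            assumed |= a
        options.append((cost, assumed))
    return min(options, key=lambda o: o[0])

cost, assumed = best_proof("engine_trouble")
print(cost, assumed)  # 2.0 {'broken_belt'}: the cheaper explanation wins
```

The same machinery, read in reverse, is what lets an utterance’s content be accommodated: the assumptions in the least-cost proof become the new material added to the discourse context.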
An essential ingredient of language use is our ability to reason about utterances as intentional actions. Linguistic representations are the natural substrate for such reasoning, and models from computational semantics can often be seen as providing an infrastructure to carry out such inferences from rich and accurate grammatical descriptions. Exploring such inferences offers a productive pragmatic perspective on problems of interpretation, and promises to leverage semantic representations in more flexible and more general tools that compute with meaning.
We use the interpretation of vague scalar predicates like small as an illustration of how systematic semantic models of dialogue context enable the derivation of useful, fine-grained utterance interpretations from radically underspecified semantic forms. Because dialogue context suffices to determine salient alternative scales and relevant distinctions along these scales, we can infer implicit standards of comparison for vague scalar predicates through completely general pragmatics, yet closely constrain the intended meaning to within a natural range.
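One illustrative way to make the inference of a standard of comparison concrete is a simple gap heuristic over the contextually salient alternatives: place the cutoff for small in the widest gap on the relevant scale. This sketch, including the heuristic itself and the example sizes, is an assumption for illustration rather than the paper’s model.

```python
# Infer a contextual threshold for "small" from salient alternatives.
# The gap heuristic and the example data are hypothetical.

def infer_standard(sizes):
    """Place the cutoff for 'small' in the widest gap between sorted sizes."""
    s = sorted(sizes)
    gaps = [(s[i + 1] - s[i], i) for i in range(len(s) - 1)]
    _, i = max(gaps)
    return (s[i] + s[i + 1]) / 2  # midpoint of the widest gap

# Three salient objects: two clearly smaller than the third.
sizes = {"a": 2.0, "b": 2.5, "c": 9.0}
theta = infer_standard(sizes.values())
small = {name for name, v in sizes.items() if v < theta}
print(theta, small)  # theta == 5.75; 'a' and 'b' count as small
```

The point of the illustration is that the predicate’s semantics stays radically underspecified; only the contextual set of alternatives fixes the threshold.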
The mid-twentieth century saw the introduction of a new general model of processes, COMPUTATION, with the work of scientists such as Turing, Chomsky, Newell and Simon. This model so revolutionized the intellectual world that the dominant scientific programs of the day—spearheaded by such eminent scientists as Hilbert, Bloomfield and Skinner—are today remembered as much for the way computation exposed their stark limitations as for their positive contributions. Ever since, the field of Artificial Intelligence (AI) has defined itself as the subfield of computer science dedicated to the understanding of intelligent entities as computational processes. Now, drawing on fifty years of results of increasing breadth and applicability, we can also characterize AI research as a concrete practice: an ENGINEER-…
Environmental ethics has reached a certain level of maturity; further significant advances require reexamining its status within the larger realm of moral philosophy. It could aim to extend to nonhumans one of the familiar sets of principles, subject to appropriate modifications; or it could seek to break away and put forward its own paradigm or paradigms. Selecting the proper course requires, as the most immediate mission, exploring the formal requirements of an ethical system. In general, are there constraints against bringing our moral relations with different sorts of things under different rules of governance? In particular, how much independence can an environmental ethic (or ethics) aim to have?
Ancient Peripatetics and Neoplatonists had great difficulty coming up with a consistent, interpretatively reasonable, and empirically adequate Aristotelian theory of complete mixture or complexion. I explain some of the main problems, with special attention to authors with whom Avicenna was familiar. I then show how Avicenna used a new doctrine of the occultness of substantial form (whose roots are found in Alfarabi) to address these problems. The result was in some respects an improvement, but it also gave rise to a new set of problems, which were later to prove fateful in the history of early modern philosophy.
Utterances in situated activity are about the world. Theories and systems normally capture this by assuming references must be resolved to real-world entities in utterance understanding. We describe a number of puzzles and problems for this approach, and propose an alternative semantic representation using discourse relations that link utterances to the nonlinguistic context to capture the context-dependent interpretation of situated utterances. Our approach promises better empirical coverage and more straightforward system building. Substantiating these advantages is work in progress.
Traditional attempts to delineate the distinctive rationality of modern science have taken it for granted that the purpose of empirical research is to test judgments. The choice of concepts to use in those judgments is therefore seen either as a matter of indifference (Popper) or as an important choice which must be made, so to speak, in advance of all empirical research (Carnap). I argue that scientific method aims precisely at empirical testing of concepts, and that even the simplest scientific experiment or observation results in conceptual change.
This paper gives a new, proof-theoretic explanation of partial-order reasoning about time in a nonmonotonic theory of action. The explanation relies on the technique of lifting ground proof systems to compute results using variables and unification. The ground theory uses argumentation in modal logic for sound and complete reasoning about specifications whose semantics follows Gelfond and Lifschitz’s language A. The proof theory of modal logic A represents inertia by rules that can be instantiated by sequences of time steps or events. Lifting such rules introduces string variables and associates each proof with a set of string equations; these equations are equivalent to a set of partial-order tree-constraints that can be solved efficiently. The defeasible occlusion of inertia likewise imposes partial-order constraints in the lifted system. By deriving an auxiliary partial-order representation of action from the underlying logic, not the input formulas or proofs found, this paper strengthens the connection between practical planners and formal theories of action. Moreover, the general correctness of the theory of action justifies partial-order representations not only for forward reasoning from a completely specified start state, but also for explanatory reasoning and for reasoning by cases.
We relate the theory of presupposition accommodation to a computational framework for reasoning in conversation. We understand presuppositions as private commitments the speaker makes in using an utterance but expects the listener to recognize based on mutual information. On this understanding, the conversation can move forward not just through the positive effects of interlocutors’ utterances but also from the retrospective insight interlocutors gain about one another’s mental states from observing what they do. Our title, ENLIGHTENED UPDATE, highlights such cases. Our approach fleshes out two key principles: that interpretation is a form of intention recognition; and that intentions are complex informational structures, which specify commitments to conditions and to outcomes as well as to actions. We present a formalization and implementation of these principles for a simple conversational agent, and draw on this case study to argue that pragmatic reasoning is holistic in character, continuous with common-sense reasoning about collaborative activities, and most effectively characterized by associating specific, reliable interpretive constraints directly with grammatical forms. In showing how to make such claims precise and to develop theories that respect them, we illustrate the general place of computation in the cognitive science of language.
In modal subordination, a modal sentence is interpreted relative to a hypothetical scenario introduced in an earlier sentence. In this paper, I argue that this phenomenon reflects the fact that the interpretation of modals is an ANAPHORIC process. Modal morphemes introduce sets of possible worlds, representing alternative hypothetical scenarios, as entities into the discourse model. Their interpretation depends on evoking sets of worlds recording described and reference scenarios, and relating such sets to one another using familiar notions of restricted, preferential quantification. This proposal relies on an extended model of environments in dynamic semantics to keep track of associations between possible worlds and ordinary individuals; it assumes that modal meanings and other lexical meanings encapsulate quantification over possible worlds. These two innovations are required in order for modals to refer to sets of possible worlds directly as static objects in place of the inherently dynamic objects—quite different from the referents of pronouns and tenses—used in previous accounts. The simpler proposal that results offers better empirical coverage and suggests a new parallel between modal and temporal interpretation.
In abductive planning, plans are constructed as reasons for an agent to act: plans are demonstrations, in a logical theory of action, that a goal will result assuming that given actions occur successfully. This paper shows how to construct plans abductively for an agent that can sense the world to augment its partial information. We use a formalism that explicitly refers not only to time but also to the information on which the agent deliberates. Goals are reformulated to represent the successive stages of deliberation and action the agent follows in carrying out a course of action, while constraints on assumed actions ensure that the agent at each step performs a specific action selected for its known effects. The result is a simple formalism that can directly inform extensions to implemented planners.
Interdisciplinary investigations marry the methods and concerns of different fields. Computer science is the study of precise descriptions of finite processes; semantics is the study of meaning in language. Thus, computational semantics embraces any project that approaches the phenomenon of meaning by way of tasks that can be performed by following definite sets of mechanical instructions. So understood, computational semantics revels in applying semantics, by creating intelligent devices whose broader behavior fits the meanings of utterances, and not just their form. IBM’s Watson (Ferrucci, Brown, Chu-Carroll, Fan, Gondek, Kalyanpur, Lally, Murdock, Nyberg, Prager, Schlaefer & Welty 2010) is a harbinger of the excitement and potential of this technology.
The cognitive hierarchy model is an approach to decision making in multi-agent interactions motivated by laboratory studies of people. It bases decisions on empirical assumptions about agents’ likely play and agents’ limited abilities to second-guess their opponents. It is attractive as a model of human reasoning in economic settings, and has proved successful in designing agents that perform effectively in interactions not only with similar strategies but also with sophisticated agents, with simpler computer programs, and with people. In this paper, we explore the qualitative structure of iterated best-response solutions in two repeated games, one without messages and the other including communication in the form of non-binding promises and threats. Once the model anticipates interacting with sufficiently sophisticated agents with a sufficiently high probability, reasoning leads to policies that disclose intentions truthfully and expect credibility from the agents they interact with, even as those policies act aggressively to discover and exploit other agents’ weaknesses and idiosyncrasies. Non-binding communication improves overall agent performance in our experiments.
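The iterated best-response computation at the heart of the model can be sketched in a few lines: level-0 players randomize uniformly, and a level-k player best-responds to a Poisson-weighted mixture of the lower levels. The 2x2 coordination game, the payoffs, and the parameter `TAU` below are illustrative assumptions, not the paper’s repeated-game experiments.

```python
# A minimal Poisson cognitive-hierarchy calculation over a toy 2x2 game.
import math

PAYOFF = [[3.0, 0.0],   # row player's payoff: PAYOFF[my_action][their_action]
          [0.0, 1.0]]
TAU = 1.5               # mean of the Poisson distribution over levels

def poisson(k):
    return math.exp(-TAU) * TAU ** k / math.factorial(k)

def level_policies(max_level):
    """Return, per level, a probability distribution over the two actions."""
    policies = [[0.5, 0.5]]                     # level 0: uniform play
    for k in range(1, max_level + 1):
        weights = [poisson(j) for j in range(k)]
        total = sum(weights)
        # Opponent mixture as seen by a level-k player.
        mix = [sum(w * policies[j][a] for j, w in enumerate(weights)) / total
               for a in range(2)]
        utils = [sum(PAYOFF[a][b] * mix[b] for b in range(2))
                 for a in range(2)]
        best = max(range(2), key=lambda a: utils[a])
        policies.append([1.0 if a == best else 0.0 for a in range(2)])
    return policies

for k, p in enumerate(level_policies(3)):
    print(k, p)  # levels 1+ all coordinate on the high-payoff action
```

In this toy game the hierarchy converges immediately; the interesting qualitative structure the paper studies arises when the game is repeated and messages let policies signal and test credibility.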
We translate sentence generation from TAG grammars with semantic and pragmatic information into a planning problem by encoding the contribution of each word declaratively and explicitly. This allows us to exploit the performance of off-the-shelf planners. It also opens up new perspectives on referring expression generation and the relationship between language and action.
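A drastically simplified rendering of generation-as-planning: each word is an operator contributing semantic content, and the planner searches for a shortest sequence of words whose combined contribution matches the communicative goal. The lexicon, the flat set-of-predications semantics, and the disregard of syntax and word order are all simplifying assumptions; the paper itself encodes TAG derivations for off-the-shelf planners.

```python
# Toy "sentence generation as planning" via brute-force search.
# Lexicon and goal semantics are hypothetical.
from itertools import permutations

LEXICON = {           # word -> declarative semantic contribution
    "the": set(),
    "white": {"white(r)"},
    "rabbit": {"rabbit(r)"},
    "sleeps": {"sleep(e)", "agent(e,r)"},
}

def plan_sentence(goal):
    """Find a shortest word sequence whose contributions cover the goal."""
    words = list(LEXICON)
    for n in range(1, len(words) + 1):
        for seq in permutations(words, n):
            sem = set().union(*(LEXICON[w] for w in seq))
            if sem == goal:
                return list(seq)
    return None

goal = {"rabbit(r)", "white(r)", "sleep(e)", "agent(e,r)"}
print(plan_sentence(goal))
```

A real encoding replaces this exponential search with a planner operating over declaratively specified operators, which is exactly what makes the off-the-shelf-planner translation attractive.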
This chapter investigates the computational consequences of a broadly Gricean view of language use as intentional activity. In this view, dialogue rests on coordinated reasoning about communicative intentions. The speaker produces each utterance by formulating a suitable communicative intention. The hearer understands it by recognizing the communicative intention behind it. When this coordination is successful, interlocutors succeed in considering the same intentions—that is, the same representations of utterance meaning—as the dialogue proceeds. In this paper, I emphasize that these intentions can be formalized; we can provide abstract but systematic representations that spell out what a speaker is trying to do with an utterance. Such representations describe utterances simultaneously as the product of our knowledge of grammar and as actions chosen for a reason. In particular, they must characterize the speaker’s utterance in grammatical terms, provide the links to the context that the grammar requires, and so arrive at a contribution that the speaker aims to achieve. Because I have implemented this formalism, we can regard it as a possible analysis of conversational processes at the level of computational theory. Nevertheless, this analysis leaves open what the nature of the biological computation involved in inference to intentions is, and what regularities in language use support this computation.
My aim in this paper is to explain how universal statements, as they occur in scientific theories, are actually tested by observational evidence, and to draw certain conclusions, on that basis, about the way in which scientific theories are tested in general. But I am pursuing that aim, ambitious enough in and of itself, in the service of even more ambitious projects, and in the first place: (a) to say what is distinctive about modern science, and especially modern physical science, as a human intellectual activity; and (b) to show how this distinctiveness explains the unique status of modern science in human intellectual life. So I will begin by saying a few words about that larger project. One might doubt, first, whether that project is legitimate. Although everything is different from everything else, the question “What is distinctive about X?” is not necessarily well put, because X may not be anything—that is, anything distinct. And there are indeed many philosophers, otherwise of the most diverse intellectual backgrounds and tendencies, who would deny, in various ways, that modern science is a distinct thing, or that its status is unique. It seems to me that they deny something terrifyingly obvious—a fact which confronts us far more urgently than the fact that, say, ravens are black. I will put off further remarks about this until the end of the paper, however, because the discussion of my more limited present aim will focus precisely on the ways to tell when a question is well put, and whether there is such a (distinct) thing as X. That one’s methodological problems are also the subject of one’s investigation is a sign that that investigation is philosophical, although (or because) also a threat to its coherence. Second, some points about methodology. I am not a sociologist, or even a historian, nor (unlike some recent philosophers of science) will I pretend to be…
We use a dynamic, context-sensitive approach to abductive interpretation to describe coordinated processes of understanding, generation and accommodation in dialogue. The agent updates the dialogue uniformly for its own and its interlocutors’ utterances, by accommodating a new context, inferred abductively, in which utterance content is both true and prominent. The generator plans natural and comprehensible utterances by exploiting the same abductive preferences used in understanding. We illustrate our approach by formalizing and implementing some interactions between information structure and the form of referring expressions.
The following list contains a survey of some important and recent research in modeling face-to-face conversation. The list below is presented as a guide to the literature by topic and date; we include complete citations afterwards in alphabetical order. For brevity, research works are keyed by first author and date only (we use these keys on the slides as well as in this list). Of course, most papers are multiply authored. The list is not intended to be exhaustive. Our primary aim is simply to provide bibliographic information for all the research that we will refer to during the ESSLLI class itself. The entries also provide a sampling from ongoing research projects so that you can get an overall sense of the state of the field and begin to follow up topics of particular interest to you.
We present a formal analysis of iconic coverbal gesture. Our model describes the incomplete meaning of gesture that is derivable from its form, and the pragmatic reasoning that yields a more specific interpretation. Our formalism builds…
Three experiments examined contributions of study phase awareness of word identity to subsequent word-identification priming by manipulating visual attention to words at study. In Experiment 1, word-identification priming was reduced for ignored relative to attended words, even though ignored words were identified sufficiently to produce negative priming in the study phase. Word-identification priming was also reduced after color naming relative to emotional valence rating (Experiment 2) or word reading (Experiment 3), even though an effect of emotional valence upon color naming (Experiment 2) indicated that words were identified at study. Thus, word-identification priming was reduced even when word identification occurred at study. Word-identification priming may depend on awareness of word identity at the time of study.
Knowledge representation is a technical practice: the enterprise of specifying information about the world for use in computer systems. Knowledge representation as a field also encompasses conceptual results that call practitioners’ attention to important truths about the world, mathematical results that allow practitioners to make these truths precise, and computational results that put these truths to work. This chapter surveys this practice and its results, as it applies to the interpretation of natural language utterances in implemented natural language processing systems. For a broader perspective on such technical practice, in all its strengths and weaknesses, see Agre (1997). Knowledge representation offers a powerful general tool for the science of language. Computational logic, a prototypical formalism for representing knowledge about the world, is also the model for the level of logical form that linguists use to characterize the grammar of meaning (Larson and Segal 1995). And researchers from Schank and Abelson (1977) to Shieber (1993) and Bos (to appear) have relied crucially on such representations, and the inference methods associated with them, in articulating accounts of semantic relations in language, such as synonymy, entailment, informativeness and contradiction. The new textbooks (Blackburn and Bos 2002a, 2002b) provide an excellent grounding in this research, and demonstrate how deeply computational ideas from knowledge representation can inform pure linguistic study. In this short chapter, I must leave much of…
Algorithms for NLG. NLG is typically broken down into stages of discourse planning (to select information and organize it into coherent paragraphs), sentence planning (to choose words and structures to fit information into sentence-sized units), and realization (to determine the surface form of the output, including word order, morphology and final formatting or intonation). The SPUD system combines the generation steps of sentence planning and surface realization by using a lexicalized grammar to construct the syntax and semantics of a sentence simultaneously.
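The pipeline stages named above can be caricatured in a few lines. Each function here is a placeholder standing in for a substantial component, and the names, data format, and example are hypothetical; SPUD’s contribution is precisely to fuse the last two stages rather than run them in sequence.

```python
# A schematic sketch of the classic three-stage NLG pipeline.

def discourse_plan(facts):
    """Select and order content into sentence-sized messages."""
    return [f for f in facts if f.get("newsworthy")]

def sentence_plan(message):
    """Choose words and structure for one message."""
    return {"verb": message["event"], "subject": message["agent"]}

def realize(plan):
    """Fix surface order and morphology."""
    return f"{plan['subject'].capitalize()} {plan['verb']}s."

facts = [{"event": "sleep", "agent": "the rabbit", "newsworthy": True},
         {"event": "exist", "agent": "the field", "newsworthy": False}]
for m in discourse_plan(facts):
    print(realize(sentence_plan(m)))  # The rabbit sleeps.
```

In a lexicalized-grammar system like SPUD, the equivalent of `sentence_plan` and `realize` happen in one search, so each lexical choice is made with its syntactic and semantic consequences in view.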
We study prefixed tableaux for first-order multi-modal logic, providing proofs for soundness and completeness theorems, a Herbrand theorem on deductions describing the use of Herbrand or Skolem terms in place of parameters in proofs, and a lifting theorem describing the use of variables and constraints to describe instantiation. The general development applies uniformly across a range of regimes for defining modal operators and relating them to one another; we also consider certain simplifications that are possible with restricted modal theories and fragments.
This paper pursues a formal analogy between natural language dialogue and collaborative real-world action in general. The analogy depends on an analysis of two aspects of collaboration that figure crucially in language use. First, agents must be able to coordinate abstractly about future decisions which cannot be made on present information. Second, when agents finally take such decisions, they must again coordinate in order to interpret one another’s actions as collaborative. The contribution of this paper is a general representation of collaborative plans and intentions, inspired by representations of deductions in logics of knowledge, action and time, which supports these two kinds of coordination. Such representations…
This paper argues for teaching computer science to linguists through a general course at the introductory graduate level whose goal is to prepare students of all backgrounds for collaborative computational research, especially in the sciences. We describe our work over the past three years in creating a model course in the area, called Computational Thinking. What makes this course distinctive is its combined emphasis on the formulation and solution of computational problems, strategies for interdisciplinary communication, and critical thinking about computational explanations.
We describe a methodology for learning a disambiguation model for deep pragmatic interpretations in the context of situated task-oriented dialogue. The system accumulates training examples for ambiguity resolution by tracking the fates of alternative interpretations across dialogue, including subsequent clarificatory episodes initiated by the system itself. We illustrate with a case study building maximum entropy models over abductive interpretations in a referential communication task. The resulting model correctly resolves 81% of ambiguities left unresolved by an initial handcrafted baseline. A key innovation is that our method draws exclusively on a system’s own skills and experience and requires no human annotation.
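The learning setup can be sketched as a small maximum-entropy (multinomial logistic) model trained by gradient ascent on examples whose correct interpretation later dialogue reveals. The feature names, the invented training data, and the training details below are assumptions for illustration; only the general maxent-over-interpretations idea comes from the abstract.

```python
# Toy maxent disambiguation: score candidate interpretations by features,
# train on self-collected examples where subsequent dialogue identified
# the right reading. Features and data are hypothetical.
import math

FEATURES = ["recent_referent", "matches_color", "requires_accommodation"]

def score(weights, feats):
    return sum(w * f for w, f in zip(weights, feats))

def train(examples, epochs=200, lr=0.1):
    """Fit weights by gradient ascent on conditional log-likelihood."""
    weights = [0.0] * len(FEATURES)
    for _ in range(epochs):
        for candidates, gold in examples:
            exps = [math.exp(score(weights, f)) for f in candidates]
            z = sum(exps)
            for i in range(len(weights)):
                expected = sum(e * f[i]
                               for e, f in zip(exps, candidates)) / z
                weights[i] += lr * (candidates[gold][i] - expected)
    return weights

# Each example: feature vectors for the candidate interpretations,
# plus the index of the one later dialogue confirmed.
examples = [
    ([[1, 1, 0], [0, 1, 1]], 0),
    ([[1, 0, 0], [0, 0, 1]], 0),
    ([[0, 1, 1], [1, 1, 0]], 1),
]
w = train(examples)
best = max(range(2), key=lambda i: score(w, examples[0][0][i]))
print(best)  # picks the confirmed interpretation for the first example
```

The crucial property mirrored here is that the gold labels come from the dialogue’s own later evidence, not from a human annotator.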
Here I establish a parallel between modern epistemology and traditional metaphysics: between the way we know an object, on the one hand, and the way an object's causes cause it to exist, on the other. I show that different efficient causes in the Thomistic system correspond to different questions of knowledge, as analyzed by Stanley Cavell, and that in particular the question the Cavellian skeptic asks corresponds to God's causation in creation. As I have explained in detail elsewhere, and discuss briefly here, this parallel represents far more than a formal analogy between a series of issues in epistemology and a series of issues in metaphysics. It helps to explain, in fact, why modern philosophers (e.g., Husserl) were ultimately driven to put the human ego in the place of God, as creating (or "positing") the objects of its knowledge, thereby denying the very distinction between epistemology and ontology.
Internal friction and dynamic shear modulus in an indium-21 at.% thallium alloy were measured as functions of frequency and cooling rate using broadband viscoelastic spectroscopy during the martensitic transformation, which occurs in this material around 50°C. Microstructural evolution of martensitic bands was captured using time-lapse optical microscopy. The amplitude of damping peaks due to the temperature-induced transformation in the polycrystalline alloy was found to exceed that reported by others for single crystals of similar alloy compositions, in contrast to the usual reduction in damping in polycrystals. The high-temperature portion of the damping peak occurs before martensitic bands are observed; therefore this portion cannot be due to interfacial motion. Constrained negative stiffness of the grains can account for this damping, as well as for amplification of internal friction peaks in these polycrystals and for sigmoid-shaped anomalies in the shear modulus at high cooling rates. Surface features associated with a previously unreported pre-martensitic phenomenon are seen at temperatures above the martensite-start temperature.
We cannot explain our diverse practices for engaging with imagery through general pragmatic mechanisms. There is no general mechanism behind practices like metaphor and irony. Metaphor works the way it works; irony works the way it works.
This Discussion Meeting Issue of the Philosophical Transactions A had its genesis in a Discussion Meeting of the Royal Society which took place on 10–11 October 2011. The Discussion Meeting, entitled ‘Warm climates of the past: a lesson for the future?’, brought together 16 eminent international speakers from the field of palaeoclimate, and was attended by over 280 scientists and members of the public. Many of the speakers have contributed to the papers compiled in this Discussion Meeting Issue. The papers summarize the talks at the meeting, and present further or related work. This Discussion Meeting Issue asks to what extent information gleaned from the study of past climates can aid our understanding of future climate change. Climate change is currently an issue at the forefront of environmental science, and also has important sociological and political implications. Most future predictions are carried out by complex numerical models; however, these models cannot be rigorously tested for scenarios outside of the modern era without making use of past climate data. Furthermore, past climate data can inform our understanding of how the Earth system operates, and can provide important contextual information related to environmental change. All past time periods can be useful in this context; here, we focus on past climates that were warmer than the modern climate, as these are likely to be the most similar to the future. This introductory paper is not meant as a comprehensive overview of all work in this field. Instead, it gives an introduction to the important issues therein, using the papers in this Discussion Meeting Issue, and other works from all the Discussion Meeting speakers, as exemplars of the various ways in which past climates can inform projections of future climate. Furthermore, we present new work that uses a palaeo constraint to quantitatively inform projections of future equilibrium ice sheet change.
A Community Climate System Model, Version 3 (CCSM3) simulation for 125 ka during the Last Interglacial (LIG) is compared to two recent proxy reconstructions to evaluate surface temperature changes from modern times. The dominant forcing change from modern, the orbital forcing, modified the incoming solar insolation at the top of the atmosphere, resulting in large positive anomalies in boreal summer. Greenhouse gas concentrations are similar to those of the pre-industrial (PI) Holocene. CCSM3 simulates an enhanced seasonal cycle over the Northern Hemisphere continents, with warming most developed during boreal summer. In addition, year-round warming over the North Atlantic is associated with a seasonal memory of sea ice retreat in CCSM3, which extends the effects of positive summer insolation anomalies on the high-latitude oceans to winter months. The simulated Arctic terrestrial annual warming, though, is much less than the observational evidence, suggesting either missing feedbacks in the simulation or problems with the interpretation of the proxies, or both. Over Antarctica, CCSM3 cannot reproduce the large LIG warming recorded by the Antarctic ice cores, even with simulations designed to consider observed evidence of early LIG warmth in Southern Ocean and Antarctica records and the possible disintegration of the West Antarctic Ice Sheet. Comparisons with a HadCM3 simulation indicate that sea ice is important for understanding model polar responses. Overall, the models simulate little global annual surface temperature change, while the proxy reconstructions suggest a global annual warming at the LIG (as compared to the PI Holocene) of approximately 1°C, though with possible spatial sampling biases. The CCSM3 SRES B1 (low-scenario) future projections suggest that high-latitude warmth similar to that reconstructed for the LIG may be exceeded before the end of this century.