Fundamental to spatial knowledge in all species are the representations underlying object recognition, object search, and navigation through space. But what sets humans apart from other species is our ability to express spatial experience through language. This target article explores the language of objects and places, asking what geometric properties are preserved in the representations underlying object nouns and spatial prepositions in English. Evidence from these two aspects of language suggests there are significant differences in the geometric richness with which objects and places are encoded. When an object is named, detailed geometric properties – principally the object's shape – are represented. In contrast, when an object plays the role of either “figure” or “ground” in a locational expression, only very coarse geometric object properties are represented, primarily the main axes. In addition, the spatial functions encoded by spatial prepositions tend to be nonmetric and relatively coarse, for example, “containment,” “contact,” “relative distance,” and “relative direction.” These properties are representative of other languages as well. The striking differences in the way language encodes objects versus places lead us to suggest two explanations: First, there is a tendency for languages to level out geometric detail from both object and place representations. Second, a nonlinguistic disparity between the representations of “what” and “where” underlies how language represents objects and places. The language of objects and places converges with and enriches our understanding of corresponding spatial representations.
When people describe motion events, their path expressions are biased toward inclusion of goal paths (e.g., into the house) and omission of source paths (e.g., out of the house). In this paper, we explored whether this asymmetry has its origins in people’s non-linguistic representations of events. In three experiments, 4-year-old children and adults described or remembered manner of motion events that represented animate/intentional and physical events. The results suggest that the linguistic asymmetry between goals and sources is not fully rooted in non-linguistic event representations: linguistic descriptions showed the goal bias for both kinds of events, whereas non-linguistic memory for events showed the goal bias only for events involving animate, goal-directed motion. The findings are discussed in terms of the mapping between non-linguistic representations of goals and sources in language, focusing on the role that linguistic principles play in producing a more absolute goal bias from more gradient non-linguistic representations of paths.
Containment and support have traditionally been assumed to represent universal conceptual foundations for spatial terms. This assumption can be challenged, however: English in and on are applied across a surprisingly broad range of exemplars, and comparable terms in other languages show significant variation in their application. We propose that the broad domains of both containment and support have internal structure that reflects different subtypes, that this structure is reflected in basic spatial term usage across languages, and that it constrains children's spatial term learning. Using a newly developed battery, we asked how adults and 4-year-old children speaking English or Greek distribute basic spatial terms across subtypes of containment and support. We found that containment showed similar distributions of basic terms across subtypes among all groups, while support showed such similarity only among adults, with striking differences between children learning English versus Greek. We conclude that the two domains differ considerably in the learning problems they present, and that learning in and on is remarkably complex. Together, our results point to the need for a more nuanced view of spatial term learning.
In this article, I revisit Landau and Jackendoff's paper, “What and where in spatial language and spatial cognition,” proposing a friendly amendment and reformulation. The original paper emphasized the distinct geometries that are engaged when objects are represented as members of object kinds, versus when they are represented as figure and ground in spatial expressions. We provided empirical and theoretical arguments for the link between these distinct representations in spatial language and their accompanying nonlinguistic neural representations, emphasizing the “what” and “where” systems of the visual system. In the present paper, I propose a second division of labor between two classes of spatial prepositions in English that appear to be quite distinct. One class includes prepositions such as in and on, whose core meanings engage force-dynamic, functional relationships between objects, with geometry only a marginal player. The second class includes prepositions such as above/below and right/left, whose core meanings engage geometry, with force-dynamic relationships a passing or irrelevant variable. The insight that objects’ force-dynamic relationships matter to spatial terms’ uses is not new; but thinking of these terms as a distinct set within spatial language has theoretical and empirical consequences that are new. I propose three such consequences, rooted in the fact that geometric knowledge is highly constrained and early-emerging in life, while force-dynamic knowledge of objects and their interactions is relatively unconstrained and needs to be learned piecemeal over a lengthy timeline. First, the two classes will engage different learning problems, with different developmental trajectories for both first and second language learners; second, the classes will naturally lead to different degrees of cross-linguistic variation; and third, they may be rooted in different neural representations.
Language is a collaborative act: To communicate successfully, speakers must generate utterances that are not only semantically valid but also sensitive to the knowledge state of the listener. Such sensitivity could reflect the use of an “embedded listener model,” where speakers choose utterances on the basis of an internal model of the listener's conceptual and linguistic knowledge. In this study, we ask whether parents’ spatial descriptions incorporate an embedded listener model that reflects their children's understanding of spatial relations and spatial terms. Adults described the positions of targets in spatial arrays to their children or to the adult experimenter. Arrays were designed so that targets could not be identified unless spatial relationships within the array were encoded and described. Parents of 3–4-year-old children encoded relationships in ways that were well-matched to their children's level of spatial language. These encodings differed from those of the same relationships in speech to the adult experimenter. In contrast, parents of individuals with severe spatial impairments did not show clear evidence of sensitivity to their children's level of spatial language. The results provide evidence for an embedded listener model in the domain of spatial language and indicate conditions under which the ability to model listener knowledge may be more challenging.
Barsalou is right in arguing that perception has been unduly neglected in theories of concept formation. However, the theory he proposes is a weaker version of the classical empirical hypothesis about the relationship between sensation, perception, and concepts. It is weaker because it provides no principled basis for choosing the elementary components of perception. Furthermore, the proposed mechanism of concept formation, growth and development – simulation – is essentially equivalent to the notion of a concept, frame, or theory, and therefore inherits all the well-known problems inherent in these constructs. The theory of simulation does not provide a clearly better alternative to existing notions.
Converging psychophysical evidence suggests that the human visual system parses shapes into component parts for the purposes of object recognition. We examine Schyns et al.'s claim of “creation” of features in light of recent work on part-based representations of visual shape, particularly the perceptual rules that human vision uses to parse shapes.