Although the language we encounter is typically embedded in rich discourse contexts, many existing models of processing focus largely on phenomena that occur sentence-internally. Similarly, most work on children's language learning does not consider how information can accumulate as a discourse progresses. Research in pragmatics, however, points to ways in which each subsequent utterance provides new opportunities for listeners to infer speaker meaning. Such inferences allow the listener to build up a representation of the speaker's intended topic and, more generally, to identify relationships, structures, and messages that extend across multiple utterances. We address this issue by analyzing a video corpus of child–caregiver interactions. We use topic continuity as an index of discourse structure, examining how caregivers introduce and discuss objects across utterances. For the analysis, utterances are grouped into topical discourse sequences using three annotation strategies: raw annotations of speakers' referents, the output of a model that groups utterances based on those annotations, and the judgments of human coders. We analyze how the lexical, syntactic, and social properties of caregiver–child interaction change over the course of a sequence of topically related utterances. Our findings suggest that many cues used to signal topicality in adult discourse are also available in child-directed speech.
Machines that learn and think like people must be able to learn from others. Social learning speeds up the learning process and – in combination with language – is a gateway to abstract and unobservable information. Social learning also facilitates the accumulation of knowledge across generations, helping people and artificial intelligences learn things that no individual could learn in a lifetime.