We describe a “centipede’s dilemma” that faces the sciences of human interaction. Although research on human interaction has generated extensive theoretical debate, the vast majority of studies focus on a small set of human behaviors, cognitive processes, and interactive contexts. The problem is that naturalistic human interaction must integrate all of these factors simultaneously, and broader theoretical resolution cannot come from focused experimental or computational agendas alone. We look to dynamical systems theory as a framework for thinking about how these multiple behaviors, processes, and contexts can be integrated into a broader account of human interaction. Introducing basic concepts of self-organization and synergy, we review empirical work showing that human interaction is flexible and adaptive, structuring itself incrementally during unfolding interactive tasks such as conversation or more focused goal-based contexts. We end by acknowledging that dynamical systems accounts remain short on concrete models, and we briefly describe ways that theoretical frameworks could be integrated, rather than endlessly disputed, to make progress on the centipede’s dilemma of human interaction.
We are highly tuned to each other's visual attention. Perceiving the eye or hand movements of another person can influence the timing of our own saccade or reach. However, the explanation for such spatial orienting in interpersonal contexts remains disputed. Is it due to the social appearance of the cue (a hand or an eye) or to its social relevance (a cue connected to another person with attentional and intentional states)? We developed an interpersonal version of the Posner spatial cueing paradigm. Participants saw a cue and detected a target at the same or a different location while interacting with an unseen partner. Participants were led to believe that the cue either tracked their partner's gaze location or was generated randomly by a computer; that their partner had higher or lower social rank; and that their partner was engaged in the same or a different task. We found that spatial cue-target compatibility effects were greater when the cue related to a partner's gaze. This effect was amplified by the partner's social rank, but only when participants believed their partner was engaged in the same task. Taken together, this is strong evidence that spatial orienting is interpersonally attuned to the social relevance of the cue (whether the cue is connected to another person, who this person is, and what this person is doing) and does not rely exclusively on the cue's social appearance. Visual attention is guided not only by the physical salience of one's environment but also by the mental representation of its social relevance.
When two people move in synchrony, they become more social toward each other. Yet it is not clear how this effect scales up to larger numbers of people. Does a group need to move in unison to affiliate (what we term unitary synchrony), or does affiliation arise from distributed coordination, patterns of coupled movements between individual members of a group? We developed choreographic tasks that manipulated movement synchrony without explicitly instructing groups to move in unison. Wrist accelerometers measured group movement dynamics, and we applied cross-recurrence analysis to distinguish the temporal features of emergent unitary synchrony and distributed coordination. Participants’ unitary synchrony did not predict prosocial behavior, but their distributed coordination predicted how much they liked each other, how they felt toward their group, and how much they conformed to one another's opinions. The choreography of affiliation arises from the distributed coordination of group movement dynamics.
Pulvermüller restricts himself to an unnecessarily narrow range of evidence to support his claims. Evidence from neural modeling and behavioral experiments provides further support for an account of words encoded as transcortical cell assemblies. A cognitive neuroscience of language must include a range of methodologies (e.g., neural, computational, and behavioral) and will need to focus on the online processes of real-time language comprehension and production in more natural contexts.
Corballis's explanation for right-handedness in humans relies heavily on the gestural protolanguage hypothesis, which he argues for by a series of “intuition pumps.” Scrutinizing the mirror system hypothesis and modern gesture as components of the argument, we find that they do not provide the desired evidence of a gestural precursor to speech.
We argue that the strengths of the Theory of Event Coding (TEC) can usefully be applied to a wider range of cognitive tasks and tested with more diverse methodologies. When allied with a theory of conceptual representation such as Barsalou's (1999a) perceptual symbol systems, and extended to data from eye-movement studies, the TEC has the potential to address the larger goals of an embodied view of cognition.