Single-cell recordings in monkeys provide strong evidence for an important role of the motor system in action understanding. This evidence is backed up by data from studies of the (human) mirror neuron system using neuroimaging or TMS techniques, and by behavioral experiments. Although the data acquired from single-cell recordings are generally considered to be robust, several debates have shown that the interpretation of these data is far from straightforward. We will show that research based on single-cell recordings allows for unlimited content attribution to mirror neurons. We will argue that a theoretical analysis of the mirroring process, combined with behavioral and brain studies, can provide the necessary limitations. A complexity analysis of the type of processing attributed to the mirror neuron system can help formulate restrictions on what mirroring is and what cognitive functions could, in principle, be explained by a mirror mechanism. We argue that processing at higher levels of abstraction needs the assistance of non-mirroring processes to such an extent that subsuming the processes needed to infer goals from actions under the label 'mirroring' is not warranted.
Research has shown that observers automatically align their attention with another’s gaze direction. The present study investigates whether inferring another’s attended location affects the observer’s attention in the same way as observing their gaze direction. In two experiments, we used a laterally oriented virtual human head to prime one of two laterally presented targets. Experiment 1 showed that, in contrast to the agent with closed eyes, observing the agent with open eyes facilitated the observer’s alignment of attention with the primed target location. Experiment 2, where either sunglasses or occluders concealed the agent’s eye direction, showed that only the agent with the sunglasses facilitated the observer’s alignment of attention with the target location. Taken together, the data demonstrate that head orientation alone is not sufficient to trigger a shift in the observer’s attention, that gaze direction is crucial to this process, and that inferring the region to which another person is attending does facilitate the alignment of attention.
According to embodied theories of language (ETLs), word meaning relies on sensorimotor brain areas, generally dedicated to acting and perceiving in the real world. More specifically, words denoting actions are postulated to make use of neural motor areas, while words denoting visual properties draw on the resources of visual brain areas. At the brain level, there is therefore a direct correspondence between word meaning and the experience a listener has had with a word's referent. Behavioral and neuroimaging studies have provided evidence in favor of ETLs; however, recent studies have also shown that sensorimotor information is recruited in a flexible manner during language comprehension (e.g., Raposo et al.; Van Dam et al.), leaving open the question of the level of language processing to which sensorimotor activations contribute. In this study, we investigated the time course of modality-specific contributions (i.e., the contribution of action information) to word processing by manipulating both (a) the linguistic and (b) the action context in which target words were presented. Our results demonstrate that processes reflecting sensorimotor information play a role early in word processing (i.e., within 200 ms of word presentation), but that they are sensitive to the linguistic context in which a word is presented. In other words, when sensorimotor information is activated, it is activated quickly; however, specific words do not reliably activate a consistent sensorimotor pattern.
Many studies have suggested that the motor system is organized in a hierarchical fashion, around the prototypical end location associated with using objects. However, most studies supporting the hierarchical view have used well-known actions and objects that are highly over-learned. Accordingly, at present it is unclear whether the hierarchical principle applies to learning the use of novel objects as well. In the present study we found that when learning to use a novel object, subjects acquired an action representation of the end location associated with using the object, as evidenced by slower responses in an action observation task when the object was presented at an incorrect end location. By showing the importance of knowledge about end locations when learning to use a novel object, the present study suggests that end locations are a fundamental organizing feature of the human motor system.
Although applauding Pickering & Garrod's (P&G's) attempt to ground language use in the ideomotor perception-action link, which provides an account of embodied social interaction, we suggest that it needs to be complemented by an additional control mechanism that modulates its operation in the service of the language users' communicative intentions. Implications for intergroup relationships and intercultural communication are discussed.
People cannot understand intentions behind observed actions by direct simulation, because goal inference is highly context dependent. Context dependency is a major source of computational intractability in traditional information-processing models. An embodied embedded view of cognition may be able to overcome this problem, but then the problem needs recognition and explication within the context of the new, layered cognitive architecture.
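The intractability worry raised here can be made concrete with a toy enumeration. The sketch below is purely illustrative and not taken from the commentary itself; the context factors and candidate goals are hypothetical labels. It counts the (goal, context-assignment) hypotheses that an unconstrained, context-dependent goal-inference process would have to consider, a space that doubles with every additional binary context factor.

```python
# Illustrative sketch only (hypothetical labels, not from the original commentary):
# why unconstrained, context-dependent goal inference explodes combinatorially
# in a traditional information-processing model.

from itertools import product

def candidate_interpretations(context_factors, goals):
    """Enumerate every (goal, context-assignment) hypothesis an observer would
    have to weigh if any subset of binary context factors may matter."""
    assignments = product([False, True], repeat=len(context_factors))
    return [(goal, dict(zip(context_factors, values)))
            for values in assignments
            for goal in goals]

factors = ["object within reach", "agent in a hurry", "observer present", "tool visible"]
goals = ["grasp to use", "grasp to move", "point at", "gesture"]

hypotheses = candidate_interpretations(factors, goals)
print(len(hypotheses))  # 4 goals x 2^4 assignments = 64; one more factor doubles it
```

On the embodied, embedded view favored here, the explanatory burden is then to show how most of this hypothesis space never needs to be represented explicitly within the layered cognitive architecture.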
We focus on Byrne & Russon's argument that program-level imitation is driven by hierarchically organized goals, and the related claim that to establish whether observed behavior is evidence of program-level imitation, empirical studies of imitation must use multi-stage actions as imitative tasks. We agree that goals play an indispensable role in the generation of action and imitative behavior but argue that multi-goal tasks, not only multi-stage tasks, reveal program-level imitation.
When processing information about human faces, we have to integrate different sources of information, such as skin colour and emotional expression. In three experiments, we investigated how these features are processed in a top-down manner, when task instructions determine the relevance of features, and in a bottom-up manner, when the stimulus features themselves determine processing priority. In Experiment 1, participants learned to respond with approach-avoidance movements to faces that presented both emotion and colour features. For each participant, only one of these two features was task-relevant while the other one could be ignored. In contrast to our predictions, we found better learning of task-irrelevant colour when emotion was task-relevant than vice versa. Experiment 2 showed that the learning of task-irrelevant emotional information was improved in general when participants’ awareness was increased by adding NoGo trials. Experiment 3 replicated these results...
A dual-code model of number processing needs to take into account the difference between a number symbol and its meaning. The transition from automatic non-abstract number representations to intentional abstract representations could be conceptualized as a translation of perceptual, asemantic representations of numerals into semantic representations of the associated magnitude information. The controversy about the nature of number representations should thus be related to theories on the embodied grounding of symbols.
The present commentary considers the question of what must be learned in different types of motor skills, thereby delimiting the question of what should be adjusted in the APG model in order to explain successful learning. It is concluded that an open-loop model like the APG might well be able to describe the learning pattern of motor skills in a stable, predictable environment. Recent research on saccadic plasticity, however, illustrates that motor skills performed in an unpredictable environment depend heavily on sensory (mostly visual) feedback [HOUK et al.].
Behavioural and neuroscientific research has provided evidence for a strong functional link between the neural motor system and lexical-semantic processing of action-related language. It remains unclear, however, whether the impact of motor actions is restricted to online language comprehension or whether sensorimotor codes are also important in the formation and consolidation of persisting memory representations of the words' referents. The current study demonstrates that recognition performance for action words is modulated by motor actions performed during the retention interval. Specifically, participants were required to learn words denoting objects that were associated with either a pressing or a twisting action (e.g., piano, screwdriver) and words that were not associated with actions. During a 6–8-minute retention phase, participants performed an intervening task that required the execution of pressing or twisting responses. A subsequent recognition task revealed better memory for words denoting objects whose functional use was congruent with the action performed during the retention interval (e.g., pepper mill–twisting action, doorbell–pressing action) than for words denoting objects whose functional use was incongruent. In further experiments, we were able to generalize this effect of selective memory enhancement through congruent motor actions to an implicit perceptual (Experiment 2) and an implicit semantic (Experiment 3) memory test. Our findings suggest that a reactivation of motor codes affects the process of memory consolidation and therefore emphasize the important role of sensorimotor codes in establishing enduring semantic representations.
Three experiments investigated the nature of visuo-auditory crossmodal cueing in a triadic setting: participants had to detect an auditory signal while observing another agent’s head facing one of the two laterally positioned auditory sources. Experiment 1 showed that when the agent’s eyes were open, sounds originating on the side of the agent’s gaze were detected faster than sounds originating on the side of the agent’s visible ear; when the agent’s eyes were closed, this pattern of responses was reversed. Two additional experiments showed that the results were sensitive to whether participants could infer a hearing function on the part of the agent. When no ear was depicted on the agent, only a gaze-side advantage was observed, but when the agent’s ear was covered, an ear-side advantage was observed only when hearing could still be inferred but not when hearing was inferred to be diminished. The findings are discussed in the context of inferential and simulation processes and joint attention mechanisms.