The concept of voluntary motor control (VMC) frequently appears in the neuroscientific literature, specifically in the context of cortically mediated, intentional motor actions. For cognitive scientists, the concept of VMC raises a number of interesting questions: (i) Are there dedicated, module-like structures within the motor system associated with VMC, or (ii) is VMC distributed over multiple cortical as well as subcortical structures? (iii) Is there any one place within the so-called hierarchy of motor control where voluntary movements could be said to originate? And (iv) how is the adjective "voluntary" in VMC used in the current neurological literature? These questions are considered here in the context of how higher and lower levels of motor control plan, initiate, coordinate, sequence, and modulate goal-directed motor outputs in response to changing internal and external inputs. Particularly relevant are the conceptual implications of current neurological modeling of VMC concerning causal agency.
The emulation theory of representation is developed and explored as a framework that can revealingly synthesize a wide variety of representational functions of the brain. The framework is based on constructs from control theory (forward models) and signal processing (Kalman filters). The idea is that in addition to simply engaging with the body and environment, the brain constructs neural circuits that act as models of the body and environment. During overt sensorimotor engagement, these models are driven by efference copies in parallel with the body and environment, in order to provide expectations of the sensory feedback, and to enhance and process sensory information. These models can also be run off-line in order to produce imagery, estimate outcomes of different actions, and evaluate and develop motor plans. The framework is initially developed within the context of motor control, where it has been shown that inner models running in parallel with the body can reduce the effects of feedback delay problems. The same mechanisms can account for motor imagery as the off-line driving of the emulator via efference copies. The framework is extended to account for visual imagery as the off-line driving of an emulator of the motor-visual loop. I also show how such systems can provide for amodal spatial imagery. Perception, including visual perception, results from such models being used to form expectations of, and to interpret, sensory input. I close by briefly outlining other cognitive functions that might also be synthesized within this framework, including reasoning, theory of mind phenomena, and language. Key Words: efference copies; emulation theory of representation; forward models; Kalman filters; motor control; motor imagery; perception; visual imagery.
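The core computational idea, an internal model driven by efference copies and corrected Kalman-style by sensory feedback, can be sketched in a few lines. This is a minimal illustration under assumed dynamics and a fixed gain, not the model proposed in the article; the plant, matrices, and numbers are all hypothetical.

```python
import numpy as np

# Minimal sketch of an emulator in the abstract's sense: a forward model of a
# one-dimensional plant (say, limb position and velocity) driven by efference
# copies of motor commands, with a fixed Kalman-style gain correcting the
# estimate from sensory feedback. All dynamics and gains are assumptions.

A = np.array([[1.0, 0.1],    # state transition: position += 0.1 * velocity
              [0.0, 1.0]])
B = np.array([[0.0],         # motor command changes velocity
              [0.1]])
H = np.array([[1.0, 0.0]])   # sensors report position only

def emulate(commands, observations, K=np.array([[0.5], [0.1]])):
    """Run the emulator in parallel with (or instead of) the plant.

    commands:     efference copies of motor commands, one per time step
    observations: sensory feedback per step; None means off-line operation
    K:            fixed Kalman-style gain (a full filter would update it)
    """
    x = np.zeros((2, 1))                 # estimated [position, velocity]
    positions = []
    for u, y in zip(commands, observations):
        x = A @ x + B * u                # predict from the efference copy
        if y is not None:                # on-line: correct with feedback
            x = x + K @ (np.atleast_2d(y) - H @ x)
        positions.append(float(x[0, 0]))
    return positions

# Running the same emulator off-line (no sensory input) yields a stand-in
# for motor imagery: predicted sensory consequences without movement.
imagery = emulate([1.0] * 5, [None] * 5)
```

Driving the same circuit with observations present corresponds to the on-line case, where the gain trades the prediction off against (possibly delayed) feedback.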
The Theory of Event Coding (TEC) provides a preliminary account of the interaction between perception and action, which is consistent with several recent findings in the area of motor control. However, significant issues require integration and elaboration, particularly distractor interference, automatic motor corrections, internal models of action, and the neuroanatomical bases for the link between perception and action.
In his article Grush proposes a potentially useful framework for explaining motor control, imagery, and perception. In our commentary we will address two issues that the model does not seem to deal with appropriately: one concerns motor control, and the other, the visual and motor imagery domains. We will consider these two aspects in turn.
(brain area) to small (dendritic) scales. Further, it is often useful to describe motor control and sensorimotor coordination in terms of external fields such as force fields and sensory images. We survey the basic concepts of field computation, including both feed-forward field operations and field dynamics resulting from recurrent connections. Adaptive and learning mechanisms are discussed briefly. The application of field computation to motor control is illustrated by several examples: external force fields associated with spinal neurons (Bizzi & Mussa-Ivaldi 1995), population coding of direction in motor cortex (Georgopoulos 1995), continuous transformation of direction fields (Droulez & Berthoz 1991a), and linear gain fields and coordinate transformations in posterior parietal cortex (Andersen 1995). Next we survey some field-based representations of motion, including direct, Fourier, Gabor, and wavelet or multiresolution representations. Finally we consider briefly the application of these representations to constraint satisfaction, which has many applications in motor control.
Evidence from optic ataxic patients with bilateral lesions to the superior parietal lobes does not support the view that there are separate planning and control mechanisms located in the IPL and SPL respectively. The aberrant reaches of patients with bilateral SPL damage towards extrafoveal targets seem to suggest a deficit in the selection of appropriate motor programmes rather than a deficit restricted to on-line control.
In this paper I argue that, to make intentional actions fully intelligible, we need to posit representations of action the content of which is nonconceptual. I further argue that an analysis of the properties of these nonconceptual representations, and of their relationships to action representations at higher levels, sheds light on the limits of intentional control. On the one hand, the capacity to form nonconceptual representations of goal-directed movements underpins the capacity to acquire executable concepts of these movements, thus allowing them to come under intentional control. On the other hand, the degree of autonomy these nonconceptual representations enjoy, and the specific temporal constraints stemming from their role in motor control, set limits on intentional control over action execution.
Spoken language exists because of a remarkable neural process. Inside a speaker's brain, an intended message gives rise to neural signals activating the muscles of the vocal tract. The process is remarkable because these muscles are activated in just the right way that the vocal tract produces sounds a listener understands as the intended message. What is the best approach to understanding the neural substrate of this crucial motor control process? One of the key recent modeling developments in neuroscience has been the use of state feedback control (SFC) theory to explain the role of the CNS in motor control. SFC postulates that the CNS controls motor output by (1) estimating the current dynamic state of the thing (e.g., arm) being controlled, and (2) generating controls based on this estimated state. SFC has successfully predicted a great range of non-speech motor phenomena, but as yet has not received attention in the speech motor control community. Here, we review some of the key characteristics of speech motor control and what they say about the role of the CNS in the process. We then discuss prior efforts to model the role of the CNS in speech motor control, and argue that these models have inherent limitations, limitations that are overcome by an SFC model of speech motor control which we describe. We conclude by discussing a plausible neural substrate of our model.
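The two SFC steps the abstract names, estimating the state of the controlled plant and generating controls from that estimate, can be sketched on a scalar toy plant. The dynamics, gains, and noiseless feedback below are illustrative assumptions, not the speech model the authors describe.

```python
# Hedged sketch of the state feedback control (SFC) loop: (1) estimate the
# current state of the controlled plant from sensory feedback, and
# (2) generate controls from that *estimated* state. Scalar toy plant
# x' = a*x + b*u with assumed gains; not the article's speech model.

def sfc_run(target=1.0, steps=40, a=1.0, b=0.5, L=0.6, k=0.8):
    x = 0.0        # true plant state (e.g. an articulator position)
    x_est = 0.0    # the controller's internal estimate of that state
    for _ in range(steps):
        x_est += L * (x - x_est)      # (1) correct estimate with feedback
        u = -k * (x_est - target)     # (2) control law uses the estimate, not x
        x = a * x + b * u             # plant responds to the motor command
        x_est = a * x_est + b * u     # forward model advances in parallel
    return x

final_state = sfc_run()   # converges toward the target under these assumptions
```

The point of the construction is that the control law never reads the true state directly; only the internally maintained estimate, corrected by feedback, drives the output.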
Cognitive impairments are difficult to relate to clinical symptoms in schizophrenia, partly due to insufficient knowledge of how cognitive impairments interact with one another. Here, we devised a new sequential pointing task requiring both visual organization and motor sequencing. Six circles were presented simultaneously on a touch screen around a fixation point. Participants pointed with the finger at each circle, one after the other, in synchrony with auditory tones. We used an alternating rhythmic 300/600 ms pattern so that participants performed pairs of taps separated by short intervals of 300 ms. Visual organization was manipulated by using line segments that grouped the circles two by two, yielding three pairs of connected circles and three pairs of unconnected circles that belonged to different pairs. This led to three experimental conditions. In the ‘congruent’ condition, the pairs of taps had to be executed on circles grouped by connectors. In the ‘non-congruent’ condition, they were to be executed on the unconnected circles that belonged to different pairs. In a neutral condition, there were no connectors. Twenty-two patients with schizophrenia with mild symptoms and 22 control participants performed a series of 30 taps in each condition. Tap pairs were counted as errors when the produced rhythm was inverted (expected rhythm ratio 600/300 = 2; inverted rhythm < 1). Error rates in patients with a high level of clinical disorganization were significantly higher in the non-congruent condition than in the two other conditions, contrary to controls and the remaining patients. The tap-tone asynchrony increased in the presence of connectors in both patient groups, but not in the controls. Patients appeared not to integrate the visual organization during the planning phase of action, leading to marked difficulty during motor execution, especially in those patients showing difficulties in visual organization.
Visual motor tapping tasks may help detect those subgroups of patients.
We applaud the spirit of MacNeilage's attempts to better explain the evolution and cortical control of speech by drawing on the vast literature in nonhuman primate neurobiology. However, he oversimplifies motor cortical fields and their known individual functions to such an extent that he undermines the value of his effort. In particular, MacNeilage has lumped together the functional characteristics across multiple mesial and lateral motor cortex fields, inadvertently creating two hypothetical centers that simply may not exist.
Toolmaking requires motor skills that in turn require handedness, so that there is no competition between the two sides of the brain. Thus, handedness is not necessarily linked to vocalization but to the origin of causal beliefs required for making complex tools. Language may have evolved from these processes.
Glover's planning–control model accommodates a substantial number of findings from subjects who have motor deficits as a consequence of brain lesions. A number of consistently observed and robust findings are not, however, explained by Glover's theory; additionally, the claim that the IPL supports planning whereas the SPL supports control is not consistently supported in the literature.
Several target articles in this BBS special issue address the topic of cerebellar and olivary functions, especially as they pertain to motor learning. Another important topic is the neural interaction between the limbic system and the cerebellum during associative learning. In this commentary we present some of our data on olivo-cerebellar and limbic-cerebellar interactions during eyeblink conditioning. [HOUK et al.; SIMPSON et al.; THACH].
Language can impact emotion, even when it makes no reference to emotion states. For example, reading sentences with positive meanings (“The water park is refreshing on the hot summer day”) induces patterns of facial feedback congruent with the sentence emotionality (smiling), whereas sentences with negative meanings induce a frown. Moreover, blocking facial afference with Botox selectively slows comprehension of emotional sentences. Therefore, theories of cognition should account for emotion-language interactions above the level of explicit emotion words, and for the role of peripheral feedback in comprehension. For this special issue exploring frontiers in the role of the body and environment in cognition, we propose a theory in which facial feedback provides a context-sensitive constraint on the simulation of actions described in language. Paralleling the role of emotions in real-world behavior, our account proposes that 1) facial expressions accompany sudden shifts in well-being as described in language; 2) facial expressions modulate emotion states during reading; and 3) emotion states prepare the reader for an effective simulation of the ensuing language content. To inform the theory and guide future research, we outline a framework based on internal models for motor control. To support the theory, we assemble evidence from diverse areas of research. Taking a functional view of emotion, we tie the theory to behavioral and neural evidence for a role of facial feedback in cognition. Our theoretical framework provides a detailed account that can guide future research on the role of emotional feedback in language processing, and on interactions of language and emotion. It also highlights the bodily periphery as relevant to theories of embodied cognition.
Two main approaches can be discerned in the literature on agentive self-awareness: a top-down approach, according to which agentive self-awareness is fundamentally holistic in nature and involves the operations of a central-systems narrator, and a bottom-up approach that sees agentive self-awareness as produced by low-level processes grounded in the very machinery responsible for motor production and control. Neither approach is entirely satisfactory if taken in isolation; however, the question of whether their combination would yield a full account of agentive self-awareness remains very much open. In this paper, I contrast two disorders affecting the control of voluntary action: the anarchic hand syndrome and utilization behavior. Although in both conditions patients fail to inhibit actions that are elicited by objects in the environment but inappropriate with respect to the wider context, these actions are experienced in radically different ways by the two groups of patients. I discuss how top-down and bottom-up processes involved in the generation of agentive self-awareness would have to be related in order to account for these differences.
This article presents a conceptual discussion of the phenomenon of incorporation of tools and other objects in the light of Maine de Biran’s philosophy of the relation between the body and the motor will. Drawing on Maine de Biran’s view of the body as that portion of the material world which directly obeys one’s motor will, as well as on his view (supported by studies in contemporary cognitive science) of active touch as the perceptual modality that is sensitive to objects as fields of forces resisting the perceiver’s movements, we discuss the phenomena of motor incorporation and haptic incorporation, as well as the relation between them. Motor incorporation occurs when something is integrated into the motor system, i.e. when practice enables one to animate an object as directly, effortlessly, and fluently as one is able to animate one’s own body. The subject then has the experience of acting there, where the object is located, not at the body–object interface. In order to better understand the phenomenon of motor incorporation, we highlight the phenomenological difference between directly and indirectly moving something. Haptic incorporation occurs when something is integrated into the haptic system, i.e. when an object is used as an instrument for the haptic perception of other objects. Finally, we seek to shed light on the phenomenon of transparency, understanding the transparency acquired by the incorporated object as both a relational property and a matter of degree.
The paralysis-by-analysis phenomenon, i.e., that attending to the execution of one’s movement impairs performance, has gathered a lot of attention over recent years (see Wulf, 2007, for a review). Explanations of this phenomenon, e.g., the hypotheses of constrained action (Wulf and colleagues, e.g., McNevin et al., 2003) or of step-by-step execution (Beilock et al., 2002; Masters, 1992), do not, however, address the underlying mechanisms at the level of sensorimotor control. For this purpose, a “nodal-point hypothesis” is presented here with the core assumptions that skilled motor behavior is internally based on sensorimotor chains of nodal points, that attending to intermediate nodal points leads to a muscular re-freezing of the motor system at exactly and exclusively these points in time, and that this re-freezing is accompanied by the disruption of compensatory processes, resulting in an overall decrease in motor performance. Two experiments, on lever sequencing and basketball free throws, respectively, are reported that successfully tested these time-referenced predictions, i.e., showing that muscular activity is selectively increased and compensatory variability selectively decreased at movement-related nodal points when these points are in the focus of attention.
In dynamical systems models, feedforward is needed to guide planning and to process unknown and unpredictable events. Feedforward could help the Theory of Event Coding (TEC) integrate control processes and could model human performance in action planning in a more flexible and powerful way.
This commentary focuses on issues related to Glover's suppositions regarding the information available to the on-line control system and the behavioral consequences of (visual) information disruption. According to the author, a “highly accurate,” yet temporally unstable, visual representation of peripersonal space is available for real-time trajectory corrections. However, no direct evidence is currently available to support this position.
I here present some doubts about whether Mandik’s (2010) proposed intermediacy and recurrence constraints are necessary and sufficient for agentive experience. I also argue that in order to vindicate the conclusion that agentive experience is an exclusively perceptual phenomenon (Prinz, 2007), it is not enough to show that the predictions produced by forward models of planned motor actions are conveyed by mock sensory signals. Rather, it must also be shown that the outputs of “comparator” mechanisms that compare these predictions against actual sensory feedback are also coded in a perceptual representational format.
An artificial neural network called reaCog is described which is based on a decentralized, reactive, and embodied architecture to control non-trivial hexapod walking in unpredictable environments (Walknet) as well as insect-like navigation (Navinet). In reaCog, these basic networks are extended in such a way that the complete system adopts the capability of inventing new behaviors and, via internal simulation, of planning ahead. This cognitive expansion enables the reactive system to be enriched with additional procedures. Here, we focus on the question of to what extent properties of phenomena characterized on a different level of description, such as consciousness, can be found in this minimally cognitive system. Adopting a monist view, we argue that the phenomenal aspect of mental phenomena can be neglected when discussing the function of such a system. Under this condition, we argue that reaCog is equipped with properties such as bottom-up and top-down attention, intentions, volition, and some aspects of Access Consciousness. These properties have not been explicitly implemented but emerge from the cooperation between the elements of the network. The aspects of Access Consciousness found in reaCog concern the above-mentioned ability to plan ahead and to invent and guide (new) actions. Furthermore, global accessibility of memory elements, another aspect characterizing Access Consciousness, is realized by this network. reaCog allows for both reactive/automatic control and (access-)conscious control of behavior. We discuss examples of interactions between the reactive domain and the conscious domain. Metacognition, or Reflexive Consciousness, is not a property of reaCog. Possible expansions are discussed that would allow for further properties of Access Consciousness, for verbal report on internal states, and for Metacognition.
In summary, we argue that even simple networks allow for properties of consciousness once the phenomenal aspect is left aside.
A model of gestural sequencing in speech is proposed that aspires to produce biologically plausible, fluent, and efficient movement in generating an utterance. We have previously proposed a modification of the well-known task dynamic implementation of articulatory phonology such that any given articulatory movement can be associated with a quantification of effort (Simko & Cummins, 2010). To this we add a quantitative cost that decreases as speech gestures become more precise, and hence intelligible, and a third cost component that places a premium on the duration of an utterance. Together, these three cost elements allow us to derive algorithmically optimal sequences of gestures and dynamical parameters for generating articulator movement. We show that the optimized movement displays many timing characteristics representative of real speech movement, capturing subtle details of relative timing between gestures. Optimal movement sequences also display invariances in timing that suggest syllable-level coordination for CV sequences. We explore the behavior of the model as prosodic context is manipulated in two dimensions: clarity of articulation and speech rate. Smooth, fluid, and efficient movements result.
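The three cost terms the abstract describes (effort, precision, duration) trade off against one another, and optimizing their sum picks out a particular movement. A toy scalar version, with wholly assumed functional forms and weights, shows how such an optimum emerges:

```python
# Toy illustration of a three-term movement cost of the kind the abstract
# describes: effort rises with gesture stiffness, while imprecision and
# utterance duration fall with it. Functional forms and weights here are
# assumptions for illustration, not the Simko & Cummins model.

def cost(stiffness, w_effort=1.0, w_precision=1.0, w_duration=1.0):
    effort = w_effort * stiffness ** 2        # stiff, fast gestures cost effort
    imprecision = w_precision / stiffness     # lax gestures undershoot targets
    duration = w_duration / stiffness         # lax gestures also take longer
    return effort + imprecision + duration

# Crude grid search for the stiffness minimizing total cost; in the full
# model this optimization is over whole gesture sequences, not one scalar.
best_cost, best_stiffness = min((cost(s / 100), s / 100)
                                for s in range(10, 500))
```

Raising the duration weight (a proxy for faster speech rate) shifts the optimum toward stiffer, quicker gestures, which mirrors the kind of prosodic manipulation the abstract explores.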
I provide an analysis of the concept of an “affordance” that enables one to conceive of “structural affordance” as a kind of affordance relation that might hold between an agent and its body. I then review research in the science of humanoid bodily movement to indicate the empirical reality of structural affordance.
Perception is the foundation of cognition and is fundamental to our beliefs and consequent action planning. The Editorial (this issue) asks: “what mechanisms, if any, mediate between perceptual and cognitive processes?” It has recently been argued that attention might furnish such a mechanism. In this paper, we pursue the idea that action planning (motor preparation) is an attentional phenomenon directed towards kinaesthetic signals. This rests on a view of motor control as active inference, where predictions of proprioceptive signals are fulfilled by peripheral motor reflexes. If valid, active inference suggests that attention should not be limited to the optimal biasing of perceptual signals in the exteroceptive (e.g. visual) domain but should also bias proprioceptive signals during movement. Here, we test this idea using a classical attention (Posner) paradigm cast in a motor setting. Specifically, we looked for decreases in reaction times when movements were preceded by valid relative to invalid cues. Furthermore, we addressed the hierarchical level at which putative attentional effects were expressed by independently cueing the nature of the movement and the hand used to execute it. We found a significant interaction between the validity of movement and effector cues on reaction times. This suggests that attentional bias might be mediated at a low level in the motor hierarchy, in an intrinsic frame of reference. This finding is consistent with attentional enabling of top-down predictions of proprioceptive input and may rely upon the same synaptic mechanisms that mediate directed spatial attention in the visual system.
The present study investigated the enhancement effects of an external focus-of-attention (FOA) in the context of a manual-tracking task, in which participants tracked both visible and occluded targets. Three conditions were compared, which manipulated the distance of the FOA from the participant as well as the external/internal dimension. As expected, an external FOA resulted in lower tracking errors than an internal FOA. In addition, analyses of participants' movement patterns revealed a systematic shift toward higher-frequency movements in the external FOA condition, consistent with the idea that an external FOA exploits the natural movement dynamics available during skilled action. Finally, target visibility did not influence the effect of focused attention on tracking performance, which provides evidence for the proposal that the mechanisms that underlie FOA do not depend directly on vision.