Functional Near-Infrared Spectroscopy (fNIRS) is a promising method to study the functional organization of the prefrontal cortex. However, in order to realize the high potential of fNIRS, effective discrimination between cerebral signals and physiological noise originating from forehead skin haemodynamics is required. The main sources of physiological noise are global and local blood flow regulation processes on multiple time scales. The goal of the present study was to identify the main physiological noise contributions in fNIRS forehead signals and to develop a method for physiological de-noising of fNIRS data. To achieve this goal, we combined concurrent time-domain fNIRS and peripheral physiology recordings with wavelet coherence analysis. Depth selectivity was achieved by analyzing moments of the photon time-of-flight distributions provided by time-domain fNIRS. Simultaneously, mean arterial blood pressure (MAP), heart rate (HR), and skin blood flow (SBF) on the forehead were recorded. Wavelet coherence analysis was employed to quantify the impact of physiological processes on fNIRS signals separately for different time scales. We identified three main processes contributing to physiological noise in fNIRS signals on the forehead. The first process, with a period of about 3 s, is induced by respiration. The second process is highly correlated with time-lagged MAP and HR fluctuations with a period of about 10 s, often referred to as Mayer waves. The third process is local regulation of facial skin blood flow, time-locked to the task-evoked fNIRS signals. All processes affect oxygenated haemoglobin concentration more strongly than that of deoxygenated haemoglobin. Based on these results, we developed a set of physiological regressors, which were used for physiological de-noising of fNIRS signals. Our results demonstrate that the proposed de-noising method can significantly improve the sensitivity of fNIRS to cerebral signals.
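The regressor-based de-noising described above can be illustrated with a minimal sketch. All signals and time scales here are synthetic stand-ins (the ~3 s respiration and ~10 s Mayer-wave components mentioned in the abstract), and simple least-squares projection is used in place of the paper's full wavelet-coherence-informed pipeline:

```python
import math

def regress_out(signal, regressor):
    """Remove the best-fitting scaled copy of a regressor from the
    signal via least-squares projection (a single-regressor GLM step)."""
    num = sum(s * r for s, r in zip(signal, regressor))
    den = sum(r * r for r in regressor)
    beta = num / den if den else 0.0
    return [s - beta * r for s, r in zip(signal, regressor)]

# Hypothetical 100 s recording sampled at 10 Hz.
fs, n = 10.0, 1000
t = [i / fs for i in range(n)]

# Surrogate components: ~3 s respiration, ~10 s Mayer waves, and a
# slow task-evoked cerebral component that we want to preserve.
respiration = [math.sin(2 * math.pi * ti / 3.0) for ti in t]
mayer = [math.sin(2 * math.pi * ti / 10.0) for ti in t]
cerebral = [math.sin(2 * math.pi * ti / 40.0) for ti in t]
fnirs = [c + 0.8 * r + 0.5 * m
         for c, r, m in zip(cerebral, respiration, mayer)]

# De-noise by regressing out each physiological regressor in turn.
cleaned = regress_out(regress_out(fnirs, respiration), mayer)

residual = max(abs(c, ) if False else abs(c - x) for c, x in zip(cleaned, cerebral))
print(f"max deviation from cerebral component: {residual:.3f}")
```

Sequential projection works here because the surrogate regressors are nearly orthogonal; correlated regressors would require a joint multiple-regression fit.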
Many studies have shown that behavioral measures are affected by manipulating the imageability of words. Though imageability is usually measured by human judgment, little is known about what factors underlie those judgments. We demonstrate that imageability judgments can be largely or entirely accounted for by two computable measures that have previously been associated with imageability: the size and density of a word's context, and the emotional associations of the word. We outline an algorithmic method for predicting imageability judgments using co-occurrence distances in a large corpus. Our computed judgments account for 58% of the variance in a set of nearly two thousand human imageability judgments, for words that span the entire range of imageability. The two factors account for 43% of the variance attributable to imageability in lexical decision reaction times (LDRTs) in a large database of 3697 LDRTs spanning the range of imageability. We document differences in the distribution of our measures across the range of imageability which suggest that they will account for more variance at the extremes, from which most imageability-manipulating stimulus sets are drawn. The two predictors account for 100% of the variance attributable to imageability in newly collected LDRTs using a previously published stimulus set of 100 items. We argue that our model of imageability is neurobiologically plausible by showing it is consistent with brain imaging data. The evidence we present suggests that behavioral effects in the lexical decision task that are usually attributed to the abstract/concrete distinction between words can be wholly explained by objective characteristics of the word that are not directly related to the semantic distinction. We provide computed imageability estimates for over 29,000 words.
A growing body of literature in psychology, linguistics, and the neurosciences has paid increasing attention to understanding the relationships between the phonological representations of words and their meaning: a phenomenon also known as phonological iconicity. In this article, we investigate how a text's intended emotional meaning, particularly in literature and poetry, may be reflected at the level of sublexical phonological salience and the use of foregrounded elements. To extract such elements from a given text, we developed a probabilistic model that detects when the frequency of occurrence of specific sublexical units in a given text exceeds a confidence interval derived from a reference linguistic corpus for the German language. Implementing this model in a computational application, we provide a text analysis tool which automatically delivers information about sublexical phonological salience, allowing researchers, inter alia, to investigate effects of the sublexical emotional tone of texts based on current findings on phonological iconicity.
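The confidence-interval test described above can be sketched with a normal approximation to the binomial. The sublexical units, counts, and reference probabilities below are invented placeholders, and the exact probabilistic model of the paper may differ in its details:

```python
import math

def salient_units(text_counts, text_total, ref_probs, z=1.96):
    """Flag sublexical units whose observed count in a text exceeds the
    upper bound of a confidence interval implied by reference-corpus
    probabilities (normal approximation to the binomial)."""
    flagged = {}
    for unit, count in text_counts.items():
        p = ref_probs.get(unit, 0.0)
        expected = p * text_total
        sd = math.sqrt(text_total * p * (1.0 - p))
        upper = expected + z * sd
        if count > upper:
            flagged[unit] = (count, round(expected, 2))
    return flagged

# Invented example: bigram counts in a 300-unit poem vs. hypothetical
# reference probabilities from a German corpus.
text_counts = {"au": 12, "ei": 5, "sch": 9}
text_total = 300
ref_probs = {"au": 0.01, "ei": 0.015, "sch": 0.012}

flagged = salient_units(text_counts, text_total, ref_probs)
print(flagged)
```

Units flagged this way ("au" and "sch" here, but not "ei") would be candidates for phonologically salient, foregrounded elements.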
Facial expressions are used by humans to convey various types of meaning in various contexts. The range of meanings extends from basic, possibly innate socio-emotional concepts such as 'surprise' to complex and culture-specific concepts such as 'carelessly'. The range of contexts in which humans use facial expressions extends from responses to events in the environment to particular linguistic constructions within sign languages. In this mini review we summarize findings on the use and acquisition of facial expressions by signers and present a unified account of the range of facial expressions used by positing three dimensions: semantic, iconic, and compositional.
Talking about emotion and putting feelings into words has been hypothesized to regulate emotion in psychotherapy as well as in everyday conversation. However, the exact dynamics of how different strategies of verbalization regulate emotion, and how these strategies are reflected in characteristics of the voice, have received little scientific attention. In the present study, we showed emotional pictures to 30 participants and asked them to verbally admit or deny an emotional experience or a neutral fact concerning the picture in a simulated conversation. We used a 2 × 2 factorial design manipulating the focus (on emotion or facts) as well as the congruency (admitting or denying) of the verbal expression. Analyses of skin conductance response (SCR) and voice during the verbalization conditions revealed a main effect of the factor focus. SCR and voice pitch were lower during emotion verbalization than during fact verbalization, indicating lower autonomic arousal. In contrast to these physiological parameters, participants reported that fact verbalization was more effective in down-regulating their emotion than emotion verbalization. These subjective ratings, however, were in line with voice parameters associated with emotional valence. That is, voice intensity showed that fact verbalization reduced negative valence more than emotion verbalization. In sum, the results of our study provide evidence that emotion verbalization, as compared to fact verbalization, is an effective emotion regulation strategy. Moreover, based on the results of our study, we propose that different verbalization strategies selectively influence the valence and arousal aspects of emotion.
The comprehension of stories requires the reader to imagine the cognitive and affective states of the characters. The content of many stories is unpleasant, as they often deal with conflict, disturbance or crisis. Nevertheless, unpleasant stories can be liked and enjoyed. In this fMRI study, we used a parametric approach to examine (1) the capacity of increasing negative valence of story contents to activate the mentalizing network (cognitive and affective theory of mind, ToM), and (2) the neural substrate of liking negatively valenced narratives. A set of 80 short narratives was compiled, ranging from neutral to negative emotional valence. For each story, mean ratings of valence and liking were obtained from a group of 32 participants in a prestudy and later included as parametric regressors in the fMRI analysis. Another group of 24 participants passively read the narratives in a 3 Tesla MRI scanner. Results revealed a stronger engagement of affective ToM-related brain areas with increasingly negative story valence. Stories that were unpleasant but simultaneously liked selectively engaged the medial prefrontal cortex (mPFC), which might reflect the moral exploration of the story content. Further analysis showed that the more the mPFC becomes engaged during the reading of negatively valenced stories, the more coactivation can be observed in other brain areas related to the neural processing of affective ToM and empathy.
We investigated how processing fluency and defamiliarization contribute to the affective and aesthetic processing of reading in an event-related fMRI experiment with 26 participants. We compared the neural correlates of processing (a) familiar German proverbs, (b) unfamiliar proverbs, (c) twisted variations which altered the concept of the original proverb (anti-proverbs), (d) variations with incorrect wording but the same concept as the original proverb (violated proverbs), and (e) non-rhetorical sentences. We report processing differences between anti-proverbs and violated proverbs. Anti-proverbs triggered a process of affective evaluation relying on self-referential thinking and semantic memory, in contrast to violated proverbs, which recruited the frontotemporal attention and error detection network. Consistent with the coarse semantic coding theory, proverb familiarity affected lateralization: relative to non-rhetorical sentences, highly familiar proverbs activated the left parahippocampal gyrus, whereas unfamiliar proverbs activated an extensive network covering bilateral frontotemporal cortex. Despite affective processing being enhanced for anti-proverbs, familiar proverbs received the highest beauty ratings. Effects of familiarity and defamiliarization on the aesthetic perception of literature are discussed.
In his review, Walter (2012) links conceptual perspectives on empathy with crucial results of neurocognitive and genetic studies and presents a descriptive neurocognitive model that identifies neuronal key structures and links them with both cognitive and affective empathy via a high and a low road. After discussion of this model, the remainder of this comment deals more generally with the possibilities and limitations of current neurocognitive models, considering ways to develop process models allowing specific quantitative predictions.
This study investigates the neuronal correlates of empathic processing in children aged 4 to 8 years, an age range considered crucial for the development of empathy. Empathy, defined as the ability to understand and share another person's inner life, consists of two components: affective empathy (emotion-sharing) and cognitive empathy (Theory of Mind). We examined the hemodynamic responses of pre-school and school children (N = 48) while they processed verbal (auditory) and non-verbal (cartoon) empathy stories in a passive following paradigm, using functional Near-Infrared Spectroscopy (fNIRS). To control for the two types of empathy, children were presented with blocks of stories eliciting either affective or cognitive empathy, or neutral scenes which relied on the understanding of physical causality. By contrasting the activations of the younger and older children, we expected to observe developmental changes in brain activation when children process stories eliciting empathy in either stimulus modality, towards a greater involvement of anterior frontal brain regions. Our results indicate that children's processing of stories eliciting affective and cognitive empathy is associated with medial and bilateral orbitofrontal cortex (OFC) activation. In contrast to what is known from studies with adult participants, no additional recruitment of posterior brain regions, which are often associated with processing stories eliciting empathy, was observed. Developmental changes were found only for stories eliciting affective empathy, with increased activation in older children in the medial OFC, the left inferior frontal gyrus (IFG), and the left dorsolateral prefrontal cortex (dlPFC). Activations for the two modalities differed only slightly; non-verbal presentation of the stimuli had a greater impact on empathy processing in children and showed more similarity to adult processing than verbal presentation.
This might be because non-verbal processing develops earlier in life.
To investigate whether second language processing is characterized by the same sensitivity to the emotional content of language as native language processing, we conducted an EEG study manipulating word emotional valence in a visual lexical decision task. Two groups of late bilinguals – native speakers of German and Spanish with sufficient proficiency in their respective second language – each performed a German and a Spanish version of the task containing identical semantic material: translations of the same words in the two languages. In contrast to theoretical proposals assuming attenuated emotionality of second language processing, a highly similar pattern of results was obtained across L1 and L2 processing: ERP waves generally reflected an early posterior negativity plus a late positive complex for words with positive or negative valence compared to neutral words, regardless of the respective test language and its L1 or L2 status. These results clearly suggest that the coupling between cognition and emotion does not qualitatively differ between L1 and L2, although the latencies of the respective effects differed by about 50 ms. Only Spanish native speakers currently living in the L2 country showed no effects for negative as compared to neutral words presented in L2, potentially reflecting a predominant positivity bias in second language processing when currently being exposed to a new culture.
Interactive Activation Models (IAMs) simulate orthographic and phonological processes in implicit memory tasks, but they account neither for associative relations between words nor for explicit memory performance. To overcome both limitations, we introduce the Associative Read-Out Model (AROM), an IAM extended by an associative layer implementing long-term associations between words. Following Hebbian learning, two words were defined as 'associated' if they co-occurred significantly often in the sentences of a large corpus. In a study-test task, a greater number of associated items in the stimulus set increased the 'yes' response rates for non-learned and learned words. To model test-phase performance, the associative layer is initialized with greater activation for learned than for non-learned items. Because IAMs scale inhibitory activation changes by the initial activation, learned items acquire greater signal variability than non-learned items, irrespective of the choice of the free parameters. This explains why the slope of the z-transformed Receiver-Operating Characteristic (z-ROC) is lower than one in recognition memory. When fitted to the empirical z-ROCs, the model likewise predicted which word is recognized with which probability at the item level. Since many of the strongest associates reflect semantic relations to the presented word (e.g., synonymy), the AROM merges form-based aspects of meaning representation with meaning relations between words.
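The link between unequal signal variability and a z-ROC slope below one can be checked with a standard unequal-variance signal detection sketch (not the AROM itself): new items are drawn from N(0, 1) and learned items from N(d', sigma_old), and the z-ROC slope comes out as 1/sigma_old. The parameter values below are illustrative:

```python
from statistics import NormalDist

def zroc_slope(d_prime, sigma_old, criteria):
    """Slope of the z-transformed ROC for an unequal-variance signal
    detection model: new items ~ N(0, 1), old items ~ N(d', sigma_old)."""
    std = NormalDist()
    old = NormalDist(mu=d_prime, sigma=sigma_old)
    # One (z_fa, z_hit) point per decision criterion.
    points = [(std.inv_cdf(1.0 - std.cdf(c)),
               std.inv_cdf(1.0 - old.cdf(c))) for c in criteria]
    # Least-squares slope of z_hit regressed on z_fa.
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, _ in points)
    return num / den

criteria = [-1.0, -0.5, 0.0, 0.5, 1.0, 1.5]
slope = zroc_slope(1.0, 1.25, criteria)
print(round(slope, 3))  # 1/1.25 = 0.8, i.e. below one
```

With sigma_old > 1 (greater variability for learned items, as the AROM predicts) the slope falls below one; with equal variances it is exactly one.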
Levelt et al. attempt to “model their theory” with WEAVER++. Modeling theories requires a model theory. The time is ripe for a methodology for building, testing, and evaluating computational models. We propose a tentative, five-step framework for tackling this problem, within which we discuss the potential strengths and weaknesses of Levelt et al.'s modeling approach.
Pulvermüller identifies two major flaws of the subtraction method of neuroimaging studies and proposes remedies. We argue that these remedies are themselves flawed and that the cognitive science community badly needs to take initial steps toward a cross-fertilization between mind mappers and cognitive modelers. Such steps could include the development of computational task models that transparently and falsifiably link the input (stimuli) and output (changes in blood flow or brain waves) of neuroimaging studies to changes in information processing activity, which is the stuff of cognitive models.
Glenberg's conception of "meaning from and for action" is too narrow. For example, it provides no satisfactory account of the "logic of Elfland," a metaphor used by Chesterton to refer to meaning acquired by being told something. "All that we call spirit and art and ecstasy only means that for one awful instant we remember that we forget." G. K. Chesterton (in Gardner 1994, p. 101).