
Cognition

Volume 141, August 2015, Pages 9-25

Parallel language activation and inhibitory control in bimodal bilinguals

https://doi.org/10.1016/j.cognition.2015.04.009

Highlights

  • Bimodal bilinguals co-activate ASL signs during auditory word recognition.

  • Bilinguals with better inhibition show reduced language co-activation.

  • Inhibitory control during word recognition does not depend on perceptual competition.

  • Without perceptual overlap, inhibition can occur earlier for bimodal bilinguals.

Abstract

Findings from recent studies suggest that spoken-language bilinguals engage nonlinguistic inhibitory control mechanisms to resolve cross-linguistic competition during auditory word recognition. Bilingual advantages in inhibitory control might stem from the need to resolve perceptual competition between similar-sounding words both within and between their two languages. If so, these advantages should be lessened or eliminated when there is no perceptual competition between two languages. The present study investigated the extent of inhibitory control recruitment during bilingual language comprehension by examining associations between language co-activation and nonlinguistic inhibitory control abilities in bimodal bilinguals, whose two languages do not perceptually compete. Cross-linguistic distractor activation was identified in the visual world paradigm, and correlated significantly with performance on a nonlinguistic spatial Stroop task within a group of 27 hearing ASL-English bilinguals. Smaller Stroop effects (indexing more efficient inhibition) were associated with reduced co-activation of ASL signs during the early stages of auditory word recognition. These results suggest that inhibitory control in auditory word recognition is not limited to resolving perceptual linguistic competition in phonological input, but is also used to moderate competition that originates at the lexico-semantic level.

Introduction

Previous research has suggested that bilinguals with two spoken languages may develop selective advantages in nonlinguistic cognitive control abilities compared to monolinguals, for instance in conflict monitoring, conflict resolution, and task-switching (e.g., Bialystok et al., 2004, Bialystok et al., 2008, Hernández et al., 2010, Kushalnagar et al., 2010, Prior and MacWhinney, 2010, Salvatierra and Rosselli, 2011). One possible explanation for these advantages is that bilinguals engage domain-general cognitive control mechanisms to manage the cognitive demands of bilingual language processing (e.g., Blumenfeld and Marian, 2011, Linck et al., 2012, Pivneva et al., 2012, Prior and Gollan, 2011, Soveri et al., 2011). Over time, growing experience with managing these demands might enhance nonlinguistic cognitive control abilities (e.g., Blumenfeld and Marian, 2013, Luk et al., 2011, Singh and Mishra, 2012).

One such cognitive demand that bilinguals commonly experience is cross-linguistic competition during auditory word recognition. For bilingual listeners, auditory input in one language activates possible word candidates regardless of language membership (e.g., Marian and Spivey, 2003a, Marian and Spivey, 2003b). This input-driven language co-activation is observed across different proficiency levels, ages of onset of language acquisition, and highly diverse language pairs (e.g., Blumenfeld and Marian, 2007, Blumenfeld and Marian, 2013, Canseco-Gonzalez et al., 2010, Cutler et al., 2006, Ju and Luce, 2004, Marian et al., 2008, Weber and Cutler, 2004). Resolving such cross-linguistic competition has been posited to require cognitive inhibition skills (e.g., Green, 1998, Shook and Marian, 2013). For example, Blumenfeld and Marian (2011) showed that, in Spanish–English bilinguals but not monolinguals, efficiency of nonlinguistic conflict resolution (as measured by a spatial Stroop task) was associated with inhibition of English within-language phonological distractors after word identification. The authors suggested that the bilingual participants routinely engage domain-general cognitive control mechanisms to resolve linguistic conflict because they must control activation of a second language, perhaps increasing overall involvement of cognitive control mechanisms during language processing (cf. Mercier, Pivneva, & Titone, 2014). In a more recent study, Blumenfeld and Marian (2013) showed that efficient conflict resolution was indeed associated with how unimodal bilinguals manage between-language activation during auditory word recognition (also see Mercier et al., 2014). 
For Spanish–English bilinguals, better performance on a nonlinguistic spatial Stroop task was associated with increased cross-linguistic activation during the early stages of word recognition (300–500 ms after word-onset) and decreased cross-linguistic activation during later stages of word recognition (633–767 ms after word-onset). That is, better inhibitory control was associated with earlier cross-linguistic distractor activation, followed by efficient resolution of such competition.

Blumenfeld and Marian (2011, 2013) suggested that the association between perceptual linguistic competition and Stroop-type inhibition for bilinguals may reflect similar underlying cognitive mechanisms. Specifically, both tasks involve processing bivalent perceptual aspects of the same stimulus (e.g., cat-cap upon hearing ca-), i.e., they represent perceptual conflict. Indeed, neuroimaging studies have shown that the neural substrates for Stroop-type inhibition and bilingual language control are largely shared (Abutalebi, 2008, Liu et al., 2004), and that bilingual experience modulates these neural substrates (e.g., Luk, Anderson, Craik, Grady, & Bialystok, 2010). Bimodal bilinguals (i.e., bilinguals with a spoken and a signed language) do not experience within-modality perceptual competition between their languages. Therefore, the absence of Stroop-type advantages in bimodal bilinguals may serve as additional evidence that recruitment of inhibition in bilingual comprehension is linked to perceptually generated competition (Emmorey, Luk, Pyers, & Bialystok, 2008).

The present study investigates whether the recruitment of Stroop-type inhibitory control mechanisms during bilinguals’ auditory word recognition exclusively depends on perceptual competition in phonological input. We do this by examining the association between nonlinguistic inhibitory control and language co-activation for bimodal bilinguals. For such bilinguals, the two languages have completely distinct, non-overlapping phonological systems, and co-activation through perceptual overlap in linguistic input is therefore not possible. As a result, if bimodal bilinguals engage inhibitory control to resolve cross-linguistic competition, then it suggests that recruitment of cognitive control processes is not exclusively driven by perceptual conflict.

Despite the absence of overlap at the phonological level, there is some evidence for co-activation between a spoken and a signed language during bilingual language processing for deaf and hearing bimodal bilinguals, possibly through top-down conceptual and lateral lexical connections between the two languages (i.e., cross-linguistic competition between lexico-semantic representations). Morford, Wilkinson, Villwock, Piñar, and Kroll (2011) found that phonological overlap between sign translation equivalents affected semantic judgments to written English word pairs in deaf ASL-English bilingual adults. Semantically related word pairs (e.g., apple and onion) were judged more quickly when their ASL sign translation equivalents overlapped in sign phonology (the ASL signs APPLE and ONION overlap in all phonological features except location). Furthermore, semantically unrelated word pairs were judged more slowly when their ASL sign translation equivalents overlapped in sign phonology (also see Kubus et al., 2014, Ormel et al., 2012).

Shook and Marian (2012) examined co-activation of signs during spoken word recognition instead of written word recognition in an eye-tracking study with hearing ASL-English bimodal bilinguals. They used a bilingual visual world paradigm to present participants with spoken words while they were looking at displays with four pictures: the target picture (that matched the spoken word) and three distractor pictures. Some of the displays included a picture of a cross-linguistic phonological distractor, for example a picture of ‘paper’ in a trial with the English target word cheese. Although cheese and paper are phonologically unrelated in English, the ASL signs CHEESE and PAPER share the same location and handshape features and only differ in movement features. ASL-English bilinguals looked more at the cross-linguistic distractor than at unrelated distractors in the first 500 ms post word-onset, suggesting they were co-activating ASL signs in the English listening experiment (see Van Hell, Ormel, Van der Loop, and Hermans (2009) for evidence of co-activation in the opposite direction, that is, spoken word activation during sign processing by sign language interpreters in training).
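The dependent measure in such visual world studies is the proportion of fixations to each picture type within a time window after word onset. As a minimal illustrative sketch (not the authors' analysis code; the sample-based data layout and region labels are hypothetical), an early-window fixation proportion could be computed like this:

```python
# Illustrative sketch: fixation proportions to each region of interest (ROI)
# in an early time window (0-500 ms post word-onset).
# Data layout is hypothetical: one dict per eye-tracking sample, with the
# sample time in ms and the ROI being fixated (or None for off-picture looks).

def fixation_proportions(samples, window=(0, 500)):
    """Return the proportion of in-window samples falling on each ROI."""
    lo, hi = window
    in_window = [s for s in samples if lo <= s["time_ms"] < hi]
    counts = {}
    for s in in_window:
        if s["roi"] is not None:
            counts[s["roi"]] = counts.get(s["roi"], 0) + 1
    total = len(in_window)
    return {roi: n / total for roi, n in counts.items()}

# Hypothetical trial sampled every 50 ms: early looks to the
# cross-linguistic distractor, then looks to the target.
samples = (
    [{"time_ms": t, "roi": "crosslinguistic"} for t in range(0, 200, 50)]
    + [{"time_ms": t, "roi": "target"} for t in range(200, 500, 50)]
)
props = fixation_proportions(samples)
```

Comparing such proportions for cross-linguistic distractors against unrelated distractors, per time window, is what indexes co-activation in this paradigm.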

Given that bimodal bilinguals co-activate spoken and signed lexical items during auditory word recognition in the absence of perceptually driven linguistic competition, they might also engage nonlinguistic inhibitory mechanisms to resolve cross-linguistic competition that originates at the lexico-semantic level. However, direct links between bimodal bilingual language processing and executive function have not been examined, and findings regarding possible enhancements in the nonlinguistic cognitive control abilities of bimodal bilinguals have so far been mixed.

Emmorey, Luk et al. (2008) compared the performance of hearing ASL-English bilinguals who had learned ASL from an early age as CODAs (i.e., children of deaf adults), unimodal bilinguals who learned two spoken languages from an early age, and English monolinguals on a conflict resolution task (an Eriksen flanker task). The researchers found that, whereas the unimodal bilinguals were faster than the other two groups, the bimodal bilinguals did not differ from the monolinguals, suggesting that (hearing) bimodal bilinguals may not experience the same advantages in cognitive control as unimodal bilinguals. To explain these results, Emmorey, Luk et al. (2008) suggested that the enhanced executive control observed for unimodal bilinguals might stem from the need to attend to and perceptually discriminate between two spoken languages, whereas perceptual cues to language membership are unambiguous for bimodal bilinguals. Furthermore, the researchers argued that the possibility for bimodal bilinguals to produce signs and words concurrently (code-blending) places lower demands on language control than for unimodal bilinguals, because less monitoring is required to ensure that the correct language is being selected.

Indeed, hearing bimodal bilinguals frequently code-blend in conversations with other bimodal bilinguals (Emmorey, Borinstein, Thompson, & Gollan, 2008), and sometimes even in conversations with non-signers (Casey & Emmorey, 2009). Interestingly, bimodal bilinguals prefer code-blending to code-switching, that is, switching between speaking and signing that would likely require inhibition of the non-target language (Emmorey, Borinstein et al., 2008). Emmorey, Petrich, and Gollan (2012) compared picture-naming times for ASL-English code-blends with English words and ASL signs produced alone and found that, although code-blending slowed English production because participants synchronized ASL and English articulatory onsets, code-blending did not slow ASL retrieval. Furthermore, during language comprehension (indexed by a semantic decision task), code-blending facilitated lexical access, as compared to either language alone. Bimodal bilinguals are thus able to simultaneously access signed and spoken lexical items seemingly without additional processing costs. Since cross-linguistic inhibition has been associated with processing costs (e.g., Meuter & Allport, 1999), this suggests that bimodal bilinguals may not inhibit their other language to the same degree as unimodal bilinguals.

Despite evidence against extensive recruitment of inhibitory control during bimodal bilingual language processing, cognitive control may nevertheless guide some aspects of processing in ASL-English bilinguals. For example, Kushalnagar et al. (2010) compared the performance of balanced and unbalanced deaf ASL-English bilingual adults on a selective attention task and an attention-switching task. Whereas the two groups performed similarly on the selective attention task, the balanced bilinguals performed better than the unbalanced bilinguals on the attention-switching task, suggesting that there might be enhancements in cognitive flexibility for bimodal bilinguals who are highly proficient in both languages. However, this study did not include comparison samples of unimodal bilinguals or monolinguals. Another study tested ASL simultaneous interpreter students on a battery of cognitive tests at the beginning of their program and two years later (MacNamara & Conway, 2014). The interpreter students improved on measures of task switching, mental flexibility, psychomotor speed, and on two working memory tasks that required the coordination or transformation of information (but not on working memory tasks requiring the storage and processing of information or on a task measuring perceptual speed). While suggestive of a modulating effect of bimodal bilingual interpreting experience on the cognitive system, this study also did not include monolingual or unimodal bilingual controls, which leaves open the possibility that these improvements came about for reasons other than increased experience with bilingual language management demands.

The aim of the present study was twofold. The primary goal was to investigate competition mechanisms during auditory word recognition in bilinguals (Blumenfeld and Marian, 2013, Mercier et al., 2014), by examining whether bimodal bilinguals engage inhibitory control to resolve cross-linguistic competition between languages without overlap in phonological input. The secondary goal was to replicate findings of parallel language activation in hearing bimodal bilinguals. Although several studies identified co-activation of a signed and a written language in deaf bimodal bilinguals (Kubus et al., 2014, Morford et al., 2011), only one published study so far has shown co-activation between a spoken and a signed language in hearing bimodal bilinguals (Shook & Marian, 2012). To this end, we examined both language co-activation during auditory word recognition and nonlinguistic conflict resolution in a group of hearing ASL-English bilinguals. We then directly linked individual differences in inhibitory control to the degree and time-course of cross-linguistic competition. More specifically, we used a bilingual visual world eye-tracking paradigm to index language co-activation in bimodal bilinguals (based on Shook & Marian, 2012), and a nonlinguistic spatial Stroop task to index inhibitory control ability, which can be linked to individual co-activation patterns (Blumenfeld and Marian, 2011, 2013).
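The individual-differences logic of this design can be sketched computationally: each participant contributes a spatial Stroop effect (incongruent minus congruent RT, smaller = more efficient inhibition) and a co-activation index (e.g., a distractor-look advantage), and the two are correlated across participants. The sketch below uses invented numbers and is not the study's analysis pipeline:

```python
# Illustrative sketch: correlate per-participant Stroop effects with a
# hypothetical co-activation index. All data here are invented.
from statistics import mean

def stroop_effect(congruent_rts, incongruent_rts):
    """Incongruent minus congruent mean RT; smaller = better inhibition."""
    return mean(incongruent_rts) - mean(congruent_rts)

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

# Three hypothetical participants: RTs in ms, co-activation as a
# distractor-look advantage (proportion of looks above baseline).
stroop = [stroop_effect([500, 520], [560, 580]),   # 60 ms effect
          stroop_effect([490, 510], [520, 540]),   # 30 ms effect
          stroop_effect([505, 515], [600, 620])]   # 100 ms effect
coact = [0.08, 0.03, 0.12]  # hypothetical co-activation indices
r = pearson_r(stroop, coact)
```

A positive correlation under this scheme would mean that larger Stroop effects (less efficient inhibition) go with more residual co-activation, which is the direction of association the study tests.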

If the association between cross-linguistic competition and nonlinguistic inhibitory control abilities during auditory word recognition is exclusively related to underlying similarities in the resolution of perceptually driven conflict, then bimodal bilinguals should not show an association between language co-activation and performance on the spatial Stroop task. In this case, such an association should only be found in the context of cross-linguistic activation between two languages with overlapping phonological input within a single modality (Blumenfeld & Marian, 2013). Alternatively, if inhibitory control mechanisms are recruited to resolve cross-linguistic competition that originates at non-perceptual (e.g., lexico-semantic) levels of processing during auditory word recognition, then bimodal bilinguals are expected to show a similar association between language co-activation and performance on the spatial Stroop task.

Section snippets

Participants

Twenty-seven proficient bilingual users of English and ASL (15 females, mean age = 27.8 years, SD = 8.4) participated in the study. Twenty participants were children of deaf adults (CODAs) and had learned ASL from an early age. The other seven participants had learned ASL as a second language as adults (L2 learners, mean age of exposure: 18.3 years, range = 15–21). Self-rated ASL proficiency (on a 1–7 scale for signing and understanding) and information on language exposure and socioeconomic status (

Results

On the word identification eye-tracking task, overall accuracy was 97.8% correct (SE = 0.2%) for bilingual participants and 97.8% correct (SE = 0.1%) for monolingual participants across critical and filler trials. Accuracy rates for critical trials only were 99.5% (SE = 0.3%) for bilingual participants and 99.6% (SE = 0.2%) for monolingual participants. Mean response time across all trials was 1500 ms (SE = 30 ms) for bilingual participants and 1449 ms (SE = 83 ms) for monolingual participants.

Discussion

Using a bilingual visual world paradigm, we confirmed that hearing ASL-English bimodal bilinguals co-activate ASL signs during English spoken word recognition, replicating Shook and Marian (2012). Critically, we showed that bimodal bilinguals with better nonlinguistic inhibitory control (as measured by a spatial Stroop task) looked less at cross-linguistic distractors, suggesting that they exhibited less language co-activation and/or resolved such co-activation more quickly. These

Acknowledgments

This research was supported by Rubicon Grant 446-10-022 from the Netherlands Organisation for Scientific Research to MG, San Diego State University Grants Program Grant 242338 and a New Investigator grant from the American Speech, Language and Hearing Foundation to HB, NIH Grant HD047736 to KE and the SDSU Research Foundation, and Grant NICHD R01HD059858 to VM. We would like to thank Michael Meirowitz and Cindy O’Grady for their help, and our participants. Finally, we would like to thank Jared

References (70)

  • Paap, K.R., et al. (2013). There is no coherent evidence for a bilingual advantage in executive processing. Cognitive Psychology.
  • Peterson, B.S., et al. (2002). An event-related functional MRI study comparing interference effects in the Simon and Stroop tasks. Cognitive Brain Research.
  • Shook, A., et al. (2012). Bimodal bilinguals co-activate both languages during spoken comprehension. Cognition.
  • Székely, A., et al. (2004). A new on-line resource for psycholinguistic studies. Journal of Memory and Language.
  • Weber, A., et al. (2004). Lexical competition in non-native spoken-word recognition. Journal of Memory and Language.
  • Bates, D. M., Maechler, M., Bolker, B., & Walker, S. (2013). lme4: Linear mixed-effects models using Eigen and S4. R...
  • Bialystok, E., et al. (2004). Bilingualism, aging, and cognitive control: Evidence from the Simon task. Psychology and Aging.
  • Bialystok, E., et al. (2008). Cognitive control and lexical access in younger and older bilinguals. Journal of Experimental Psychology: Learning, Memory, and Cognition.
  • Blumenfeld, H.K., et al. (2007). Constraints on parallel activation in bilingual spoken language processing: Examining proficiency and lexical status using eye-tracking. Language and Cognitive Processes.
  • Blumenfeld, H.K., et al. (2013). Parallel language activation and cognitive control during spoken word recognition in bilinguals. Journal of Cognitive Psychology.
  • Blumenfeld, H.K., et al. (2014). Cognitive control in bilinguals: Advantages in Stimulus–Stimulus inhibition. Bilingualism: Language and Cognition.
  • Brysbaert, M., et al. (2009). Moving beyond Kučera and Francis: A critical evaluation of current word frequency norms and the introduction of a new and improved word frequency measure for American English. Behavior Research Methods.
  • Canseco-Gonzalez, E., et al. (2010). Carpet or carcel: The effect of age of acquisition and language mode on bilingual lexical access. Language and Cognitive Processes.
  • Casey, S., et al. (2009). Co-speech gesture in bimodal bilinguals. Language and Cognitive Processes.
  • Costa, A., et al. (2009). The time course of word retrieval revealed by event-related brain potentials during overt speech. Proceedings of the National Academy of Sciences of the United States of America.
  • De Bruin, A., et al. (2015). Cognitive advantage in bilingualism: An example of publication bias? Psychological Science.
  • Dijkstra, T., et al. (2002). The architecture of the bilingual word recognition system: From identification to decision. Bilingualism: Language and Cognition.
  • Dunn, L.M., et al. (1997). Peabody picture vocabulary test-III.
  • Emmorey, K., et al. (2008). Bimodal bilingualism. Bilingualism: Language and Cognition.
  • Emmorey, K., et al. (2008). The source of enhanced cognitive control in bilinguals. Psychological Science.
  • Festman, J., et al. (2010). Individual differences in control of language interference in late bilinguals are mainly related to general executive abilities. Behavioral and Brain Functions.
  • Giezen, M.R., et al. (2015). Language co-activation and lexical selection in bimodal bilinguals: Evidence from picture-word interference. Bilingualism: Language and Cognition.
  • Green, D.W. (1998). Mental control of the bilingual lexico-semantic system. Bilingualism: Language and Cognition.
  • Grosjean, F. (1988). Exploring the recognition of guest words in bilingual speech. Language and Cognitive Processes.
  • Grosjean, F. (2008). Studying bilinguals.