
Linguistic, gestural, and cinematographic viewpoint: An analysis of ASL and English narrative

Fey Parrill, Kashmiri Stec, David Quinto-Pozos and Sebastian Rimehaug
From the journal Cognitive Linguistics

Abstract

Multimodal narrative can help us understand how conceptualizers schematize information when they create mental representations of films and may shed light on why some cinematic conventions are easier or harder for viewers to integrate. This study compares descriptions of a shot/reverse shot sequence (a sequence of camera shots from the viewpoints of different characters) across users of English and American Sign Language (ASL). We ask which gestural and linguistic resources participants use to narrate this event. Speakers and signers tended to represent the same characters via the same point of view and to show a single perspective rather than combining multiple perspectives simultaneously. Neither group explicitly mentioned the shift in cinematographic perspective. We argue that encoding multiple points of view might yield a more accurate visual description, but is avoided because it does not create a better narrative.

Appendix

While gestures were coded as either character viewpoint or observer viewpoint, manual signing was slightly more complex. Because the left and right hands may act independently of each other (e.g., one hand may produce a classifier (CL) sign while the other produces constructed action (CA) signing), we re-coded the data as shown in Table 4 and used the codes in the Manual Iconic-2 column for all quantitative analyses. There are 112 ASL items. Six of these items were coded as Mixed (Manual Iconic-1), as the signer had produced CA signing with one hand and a CL sign with the other. Rather than delete these items, we counted each of them twice, once as CA and once as CL. A minimal sketch of this recoding follows Table 4.

Table 4:

Recoding of ASL data for regression.

Left hand    Right hand    Manual Iconic-1    Manual Iconic-2
CA           CA            CA                 CA
CL           CL            CL                 CL
CA           CL            Mixed              CA and CL (counted twice)
CA           Other         Other_CA           CA
CL           Other         Other_CL           CL
Other        Other         Other              (delete)
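
Operationally, the recoding in Table 4 is a mapping from the two hand codes to zero, one, or two Manual Iconic-2 codes per item. The following is a minimal sketch of that mapping, assuming each item is coded as a (left hand, right hand) pair; it is an illustration only (the function name and example items are hypothetical, not taken from the study).

```python
# A minimal sketch (not the authors' code) of the Table 4 recoding.
# Each item is a (left_hand, right_hand) pair coded as "CA", "CL", or "Other".

def recode_item(left, right):
    """Return the Manual Iconic-2 code(s) for one ASL item.

    Mixed items (CA on one hand, CL on the other) are counted twice,
    once as CA and once as CL; items with no CA or CL on either hand
    are dropped, mirroring the "(delete)" row of Table 4.
    """
    hands = {left, right}
    if hands == {"CA", "CL"}:   # Mixed -> counted as both CA and CL
        return ["CA", "CL"]
    if "CA" in hands:           # CA + CA, or CA + Other
        return ["CA"]
    if "CL" in hands:           # CL + CL, or CL + Other
        return ["CL"]
    return []                   # Other + Other -> deleted

# Example with three hypothetical items
items = [("CA", "CA"), ("CA", "CL"), ("Other", "Other")]
recoded = [code for left, right in items for code in recode_item(left, right)]
print(recoded)  # ['CA', 'CA', 'CL']
```

Under this scheme, the six Mixed items each contribute two analysis rows (one CA, one CL), while items with no CA or CL on either hand are removed before the regression.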
Received: 2015-8-14
Revised: 2016-2-5
Revised: 2016-3-4
Accepted: 2016-3-7
Published Online: 2016-6-29
Published in Print: 2016-8-1

©2016 by De Gruyter Mouton
