Harris proposes a new theory of communication, beginning with the premise that the mental life of an individual should be conceived of as a continuous attempt to integrate the present with the past and future.
There has long been interest in why languages are shaped the way they are, and in the relationship between sign language and gesture. In sign languages, entity classifiers are handshapes that encode how objects move, how they are located relative to one another, and how multiple objects of the same type are distributed in space. Previous studies have shown that hearing adults who are asked to use only manual gestures to describe how objects move in space will use gestures that bear some similarities to classifiers. We investigated how accurately hearing adults, who had been learning British Sign Language for 1–3 years, produce and comprehend classifiers in locative and distributive constructions. In a production task, learners of BSL knew that they could use their hands to represent objects, but they had difficulty choosing the same, conventionalized, handshapes as native signers. They were, however, highly accurate at encoding location and orientation information. Learners therefore show the same pattern found in sign-naïve gesturers. In contrast, handshape, orientation, and location were comprehended with equal accuracy, and testing a group of sign-naïve adults showed that they too were able to understand classifiers with higher than chance accuracy. We conclude that adult learners of BSL bring their visuo-spatial knowledge and gestural abilities to the tasks of understanding and producing constructions that contain entity classifiers. We speculate that investigating the time course of adult sign language acquisition might shed light on how gesture became conventionalized during the genesis of sign languages.
We describe how a Community-Based Participatory Research (CBPR) process was used to develop a means of discussing end-of-life care needs of Deaf seniors. This process identified a variety of communication issues to be addressed in working with this special population. We overview the unique linguistic and cultural characteristics of this community and their implications for working with Deaf individuals to provide information for making informed decisions about end-of-life care, including completion of health care directives. Our research and our work with members of the Deaf community strongly show that communication and presentation of information should be in American Sign Language, the language of Deaf citizens.
This essay offers an interpretation and partial defense of Nietzsche's idea that moralities and moral judgments are “sign-languages” or “symptoms” of our affects, that is, of our emotions or feelings. According to Nietzsche, as I reconstruct his view, moral judgments result from the interaction of two kinds of affective responses: first, a “basic affect” of inclination toward or aversion from certain acts, and then a further affective response to that basic affect. I argue that Nietzsche views basic affects as noncognitive, that is, as identifiable solely by how they feel to the subject who experiences the affect. By contrast, I suggest that meta-affects sometimes incorporate a cognitive component like belief. After showing how this account of moral judgment comports with a reading of Nietzsche's moral philosophy that I have offered in previous work, I conclude by adducing philosophical and empirical psychological reasons for thinking that Nietzsche's account of moral judgment is correct.
Sign languages exhibit all the complexities and evolutionary advantages of spoken languages. Consequently, sign languages are problematic for a theory of language evolution that assumes a gestural origin. There are no compelling arguments why the expanding spiral between protosign and protospeech proposed by Arbib would not have resulted in the evolutionary dominance of sign over speech.
Here, a moral case is presented as to why sign languages such as Auslan should be made compulsory in general school curricula. Firstly, there are significant benefits that accrue to individuals from learning sign language. Secondly, sign language education is a matter of justice; the normalisation of sign language education and use would particularly benefit marginalised groups, such as those living with a communication disability. Finally, the integration of sign languages into the curricula would enable the flourishing of Deaf culture and go some way to resolving the tensions that have arisen from the promotion of oralist education facilitated by technologies such as cochlear implants. There are important reasons to further pursue policy proposals regarding the prioritisation of sign language in school curricula.
Since the beginning of signed language research, linguistic units have been divided into conventional, standard, and fixed signs, which were considered the core of the language, and iconic and productive signs, placed at the edge of language. In the present paper, we review different models proposed by signed language researchers over the years to describe the signed lexicon, showing how to overcome the hierarchical division between the standard and productive lexicon. Drawing on the semiotic insights of Peirce, we propose to look at signs as a triadic construction built on symbolic, iconic, and indexical features. In our model, the different iconic, symbolic, and indexical features of signs are seen as the three sides of the same triangle, detectable in the single linguistic sign. The key aspect is that the dominance of a feature determines the different uses of the linguistic unit, as we show with examples from different discourse types.
The study of signed languages has inspired scientific speculation regarding the foundations of human language. Relationships between the acquisition of sign language in apes and man are discounted on logical grounds. Evidence from the differential breakdown of sign language and manual pantomime places limits on the degree of overlap between language and nonlanguage motor systems. Evidence from functional magnetic resonance imaging reveals neural areas of convergence and divergence underlying signed and spoken languages.
Researchers in the fields of sign language and gesture studies frequently present their participants with video stimuli showing actors performing linguistic signs or co-speech gestures. Up to now, such video stimuli have been mostly controlled only for some of the technical aspects of the video material, leaving open the possibility that systematic differences in video stimulus materials may be concealed in the actual motion properties of the actor’s movements. Computer vision methods such as OpenPose enable the fitting of body-pose models to the consecutive frames of a video clip and thereby make it possible to recover the movements performed by the actor in a particular video clip without the use of a point-based or markerless motion-tracking system during recording. The OpenPoseR package provides a straightforward and reproducible way of working with these body-pose model data extracted from video clips using OpenPose, allowing researchers in the fields of sign language and gesture studies to quantify the amount of motion pertaining only to the movements performed by the actor in a video clip. These quantitative measures can be used for controlling differences in the movements of an actor in stimulus video clips or, for example, between different conditions of an experiment. In addition, the package also provides a set of functions for generating plots for data visualization, as well as an easy-to-use way of automatically extracting metadata from large sets of video files.
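The quantity this abstract describes — the amount of motion an actor performs in a clip — can be sketched as the summed frame-to-frame displacement of the body keypoints that a pose model such as OpenPose outputs. The sketch below is a hypothetical Python illustration of that idea, not the OpenPoseR API itself (OpenPoseR is an R package); the function name and array layout are assumptions for the example.

```python
import numpy as np

def total_motion(keypoints):
    """Sum of Euclidean frame-to-frame displacements over all keypoints.

    keypoints: array of shape (n_frames, n_keypoints, 2), holding the
    (x, y) coordinates of each body keypoint in each frame, e.g. as
    extracted from a video clip by a pose-estimation model.
    """
    diffs = np.diff(keypoints, axis=0)        # displacement vectors between frames
    dists = np.linalg.norm(diffs, axis=-1)    # per-keypoint distance per frame pair
    return float(dists.sum())                 # total motion in the clip

# Toy example: a single keypoint moving one unit to the right per frame
kp = np.array([[[0.0, 0.0]], [[1.0, 0.0]], [[2.0, 0.0]]])
print(total_motion(kp))  # 2.0
```

A measure like this makes clips comparable regardless of duration-normalization choices; dividing by the number of frame pairs would give average motion per frame instead.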
There are two main approaches to the problem of donkey anaphora (e.g. If John owns a donkey, he beats it). Proponents of dynamic approaches take the pronoun to be a logical variable, but they revise the semantics of quantifiers so as to allow them to bind variables that are not within their syntactic scope. Older dynamic approaches took this measure to apply solely to existential quantifiers; recent dynamic approaches have extended it to all quantifiers. By contrast, proponents of E-type analyses take the pronoun to have the semantics of a definite description (with it ≈ the donkey, or the donkey that John owns). While competing accounts make very different claims about the patterns of coindexation that are found in the syntax, these are not morphologically realized in spoken languages. But they are in sign language, namely through locus assignment and pointing. We make two main claims on the basis of ASL and LSF data. First, sign language data favor dynamic over E-type theories: in those cases in which the two approaches make conflicting predictions about possible patterns of coindexation, dynamic analyses are at an advantage. Second, among dynamic theories, sign language data favor recent ones because the very same formal mechanism is used irrespective of the indefinite or non-indefinite nature of the antecedent. Going beyond this debate, we argue that dynamic theories should allow pronouns to be bound across negative expressions, as long as the pronoun is presupposed to have a non-empty denotation. Finally, an appendix displays and explains subtle differences between overt sign language pronouns and all other pronouns in examples involving ‘disjunctive antecedents’, and suggests that counterparts of sign language loci might be found in spoken language.
How does sign language compare with gesture, on the one hand, and spoken language on the other? Sign was once viewed as nothing more than a system of pictorial gestures without linguistic structure. More recently, researchers have argued that sign is no different from spoken language, with all of the same linguistic structures. The pendulum is currently swinging back toward the view that sign is gestural, or at least has gestural components. The goal of this review is to elucidate the relationships among sign language, gesture, and spoken language. We do so by taking a close look not only at how sign has been studied over the past 50 years, but also at how the spontaneous gestures that accompany speech have been studied. We conclude that signers gesture just as speakers do. Both produce imagistic gestures along with more categorical signs or words. Because at present it is difficult to tell where sign stops and gesture begins, we suggest that sign should not be compared with speech alone but should be compared with speech-plus-gesture. Although it might be easier to blur the distinction between sign and gesture, we argue that distinguishing between sign and gesture is essential to predict certain types of learning and allows us to understand the conditions under which gesture takes on properties of sign, and speech takes on properties of gesture. We end by calling for new technology that may help us better calibrate the borders between sign and gesture.
Two central assumptions of current models of language acquisition were addressed in this study: (1) knowledge of linguistic structure is "mapped onto" earlier forms of non-linguistic knowledge; and (2) acquiring a language involves a continuous learning sequence from early gestural communication to linguistic expression. The acquisition of the first and second person pronouns ME and YOU was investigated in a longitudinal study of two deaf children of deaf parents learning American Sign Language (ASL) as a first language. Personal pronouns in ASL are formed by pointing directly to the addressee (YOU) or self (I or ME), rather than by arbitrary symbols. Thus, personal pronouns in ASL resemble paralinguistic gestures that commonly accompany speech and are used prelinguistically by both hearing and deaf children beginning around 9 months. This provides a means for investigating the transition from prelinguistic gestural to linguistic expression when both gesture and language reside in the same modality.

The results indicate that deaf children acquired knowledge of personal pronouns over a period of time, displaying errors similar to those of hearing children despite the transparency of the pointing gestures. The children initially (ages 10 and 12 months) pointed to persons, objects, and locations. Both children then exhibited a long avoidance period, during which one function of the pointing gesture (pointing to self and others) dropped out completely. During this period their language and cognitive development were otherwise entirely normal, and they continued to use other types of pointing (e.g., to objects). When pointing to self and others returned, it was marked with errors typical of hearing children; one child exhibited consistent pronoun reversal errors, thinking the YOU point referred to herself, while the other child exhibited reversal errors inconsistently.
Evidence from experimental tasks conducted with the first child revealed that pronoun errors occurred in comprehension as well. Full control of the ME and YOU pronouns was not achieved until 25-27 months, around the same time when hearing children master these forms. Thus, the study provides evidence for a discontinuity in the child's transition from prelinguistic to linguistic communication. It is argued that aspects of linguistic structure and its acquisition appear to involve distinct, language-specific knowledge.
This paper presents a study of modality in Iranian Sign Language from a cognitive perspective, aimed at analyzing two linguistic channels: facial and manual. While facial markers and their grammatical functions have been studied in some sign languages, we have few detailed analyses of the facial channel in comparison with the manual channel in conveying modal concepts. This study focuses on the interaction between manual and facial markers. A description of manual modal signs is offered. Three facial markers and their modality values are also examined: squinted eyes, brow furrow, and downward movement of lip corners. In addition to offering this first descriptive analysis of modality in ZEI, this paper also applies the Cognitive Grammar model of modality, the Control Cycle, and the Reality Model, classifying modals into two kinds, effective and epistemic. It is suggested that effective control, including effective modality, tends to be expressed on the hands, while facial markers play an important role in marking epistemic assessment, one manifestation of which is epistemic modality. ZEI, like some other sign languages, exhibits an asymmetry between the number of manual signs and facial markers expressing epistemic modality: while the face can be active in the expression of effective modality, it is commonly the only means of expressing epistemic modality. By positing an epistemic core in effective modality, Cognitive Grammar provides a theoretical basis for these findings.
Grounding refers to expressions that establish a connection between the ground and the content evoked by a nominal or finite clause. In this paper we report on two grammatical implementations of nominal grounding in Argentine Sign Language: pointing and placing. For pointing constructions, we also examine distal-proximal pointing and directive force. We introduce the concept of placing, in which a sign is produced at a specific meaningful location in space. Two types of placing are discussed: Placing-for-Creating, in which a new meaningful location is created, and Placing-by-Recruiting, which recruits an existing meaningful location. We suggest that our analysis of pointing and placing provides an account of nominal grounding unified by general cognitive principles as described within the theory of Cognitive Grammar. Pointing is known to occur in all signed languages studied to date. Although previously undocumented, we suggest that placing is also common to many, perhaps all, signed languages.