Open Access (CC BY 4.0 license). Published by De Gruyter Mouton, October 15, 2020

Assessing pragmatic aspects of L2 communication: Why, how and what for

  • Sara Gesuato

Sara Gesuato earned her PhDs from Padua University and the University of California at Berkeley. She is associate professor at Padua University, Italy, where she teaches English language and linguistics. Her research fields include pragmatics, discourse and genre analysis, and pragmatics and corpus linguistics. Her works have investigated the structure and wording of oral and written initiating and reacting speech acts, and explored pedagogical applications of speech act analysis (e.g. Mixed methods in raising sociopragmatic awareness: A proposal for combining insights from the teacher’s feedback and the interlocutor’s point of view, System, 2018; Can you Tell a Move When you Encounter One? Identifying Clues to Communicative Functions, 2019, Brill Rodopi). She has recently co-edited Doing things with words across time: snapshots of communicative practices of and from the past (Lingue e Linguaggi, 2019, with Marina Dossena and Daniela Cesiri) and The Literature-Linguistics Interface – Bridging the Gap: Between Qualitative and Quantitative Approaches to Literary Texts (Umanistica Digitale, 2019, with Rocco Coronato).

    and Erik Castello

    Erik Castello is associate professor of English language and linguistics in the Department of Linguistic and Literary Studies at the University of Padua, Italy. His research interests include learner corpus linguistics, corpus pragmatics and English language teaching and testing. He has examined aspects of written and spoken learner language, including the use of discourse markers, backchannels and it-extraposition constructions. He has recently published several articles on these topics (e.g. Holding up one’s end of the conversation in spoken English: Lexical backchannels in L2 examination discourse, International Journal of Learner Corpus Research, Benjamins, 2019, with Sara Gesuato) and co-edited a volume on Learner Corpus Research (Studies in Learner Corpus Linguistics: Research and Applications for Foreign Language Teaching and Assessment, Peter Lang, 2015, with Katherine Ackerley and Francesca Coccetta).

From the journal Lodz Papers in Pragmatics

1 Introduction

Assessment can be defined as expressing a value judgement on something/someone, that is, as explicitly indicating where a given person/thing stands in terms of their intrinsic and/or perceived qualities. It is a multi-faceted phenomenon not only because it may focus on the emotional reaction that the object of assessment may elicit, the properties it displays as a member of a given category, and/or its social-normative adequacy and appropriacy in a given context, [2] but also because it is an act of reflection and communication, combining a careful consideration of the object of assessment with the expression of the opinion formed and attitude developed as a result of that careful examination (Hunston 1994: 191). Assessment is also a practice that affects interpersonal relationships. Since it consists in taking a favourable or unfavourable stand on what is being assessed, and thus conveying a positive or negative description of it, it may, respectively, enhance or threaten the positive face of the person whose behaviour or work is being assessed. Finally, assessment may affect the scope of action of the recipient of assessment: positive assessment may entitle them to a given right and/or encourage them to take a future course of action, while negative assessment may deprive them of that right or discourage them from embarking on a given plan. The outcome of assessment therefore has implications for their negative face too.

In general terms, the rationale of assessment may be said to comprise at least three aspects: raising awareness (informing), affecting behaviour (determining future courses of action), and allocating resources (assigning rewards). First, assessment makes explicit what something is worth: it reveals or clarifies its value with regard to given standards. In this respect, it is an interpretive description of the object of assessment, which provides insights into its nature, strengths and weaknesses. Second, it is a way to determine how suitable or successful the object of assessment is with respect to the purposes it is supposed to serve. This can then serve as the basis for deciding whether to maintain the object of assessment in its present state or whether, and in what respects, to modify it. Third, the information gathered through assessment may be used to decide whether and how to reward or penalise the recipient of assessment. The outcome of assessment can thus serve as positive reinforcement or negative punishment.

Of the three above-mentioned aspects of assessment, the first, that is raising awareness, tends to be the focus of linguistic research, as a means to better account for patterns in language and language use. Indeed, as a source of information about a linguistic phenomenon, it involves defining (i.e. identifying by delimiting) its object (e.g. a genre), detailing the features that are more likely to accurately reveal its value (e.g. sequencing of topics), and establishing the criteria, or comparable objects of assessment, against which to assess those features (e.g. cohesion).

But all three aspects of assessment are relevant to language education. [3] Indeed, first and foremost, assessment “serves to gather information about students’ understanding and skills” (i.e. for instructional purposes; Cheng and Fox 2017: 7). Second, as a way to highlight how successful learners’ and/or teachers’ performance may be, assessment is meant to monitor and influence behaviour (i.e. assessment of and for learning; Cheng and Fox 2017: 4) so that later study habits and pedagogical interventions may be suitably planned for future good, or even better, performance (Black and Wiliam 1998: 2; see, e.g. Ishihara 2010). [4] Finally, the outcome of assessment of learners’ and teachers’ performance may serve to record and ratify the validity of the object of assessment (i.e. assessment for administrative purposes; Cheng and Fox 2017: 8), and to reward the behaviour of the stakeholders involved (e.g. good marks for students and good standing for teachers), thus having interpersonal and also social effects (Messick 1989). [5] These last two aspects are called assessment decisions in language education (Taylor and Nolen 2008). [6]

Assessment in linguistic research and language education is a challenging enterprise. The main reason is that the object of assessment, namely language, is a composite construct, organised at several levels simultaneously (e.g. grammar, lexis, meaning, letters/sounds). The task becomes harder when it comes to assessing overall communication skills (i.e. language use), because additional variables come into play (e.g. structure, amount of content, rhetorical strategies) as relevant to the context of communication, and contribute to the degree of success of an interactional event.

The adequacy of an interactional event depends on the participants’ pragmatic skills, that is, “the ability to use language effectively in order to achieve a specific purpose and to understand a language in context” (Thomas 1983: 92). It is based on “knowledge of the appropriate contextual use of the particular language’s linguistic resources” (Barron 2003: 10), which is put into practice in social interaction in adherence to shared values and established practices. This goal-oriented receptive and productive interactional activity, which produces effects (e.g. (mis)understanding, social harmony/friction) that matter to communication participants, is shaped by socio-cultural conventions. These are norms of interaction, which people are socialised into as members of given socio-cultural communities, and which often operate below the level of consciousness.

Assessment of pragmatic skills, therefore, involves describing and evaluating not only what language is used by interactants, but also how it is used, why and what for, with whom and when (cf. Bardovi-Harlig 2013: 68), how it is adapted across contexts, and with what effects. It is thus a way to determine in what ways and to what extent communication succeeds or fails from the point of view of language users (cf. Crystal 1997: 301) who are motivated by real-world interactional-transactional goals.

Although still relatively understudied (see Sydorenko et al. 2014: 20), the assessment of pragmatic skills is becoming a growing area of research (e.g. Roever 2011) and pedagogy (e.g. Hudson, Detmer and Brown 1995), which has led to the design and development of test batteries of learners’ pragmatic competence (e.g. Roever 2005) and also methods for gauging pragmatic skills such as DCTs, multiple choice tasks, retrospective verbal reports (e.g. Hinkel 1997; Cohen 2004). There are, however, at least three sub-fields that are still especially under-explored: the assessment of extensive discourse (but see, e.g. Sydorenko et al. 2014), teacher-based assessment of learners’ pragmatic skills in the classroom (but see, e.g. Ishihara 2009, 2010), and perception studies on the effects of discourse on the addressee (but see, e.g. Wolfe et al. 2016).

Given the vastness of the field of pragmatics, on the one hand, and the multi-facetedness of assessment, on the other, each contribution is bound to be selective, that is, focused on specific pragmatic aspects. Thus, pragmatics assessment research may target different types of discursive behaviour like errors (e.g. Janopoulos 1992; Beason 2001; Wolfe et al. 2016) or speech acts, including apologies (e.g. Tajeddin and Alemi 2014), refusals (e.g. Alemi and Tajeddin 2013) and compliment responses (e.g. Alemi, Eslami and Rezanejad 2014). It can also be relevant to different competences, namely pragmatic-declarative knowledge (e.g. Bardovi-Harlig and Dörnyei 1998), metapragmatic-“reflective” knowledge, and pragmatic ability or procedural knowledge (e.g. Ishihara 2009). It may be oriented toward the analysis of language users’ productive and/or receptive communicative skills (e.g. Koike 1989), as well as toward their ability to judge the acceptability of given discursive events (e.g. Bardovi-Harlig and Dörnyei 1998). Also, it may explore the technical (de)merits of language production in terms of its linguistic and discursive features (e.g. Krulatz 2015; Taguchi 2006) and/or its contextual effects, that is, the cognitive, emotional and behavioural reactions it triggers (e.g. Janopoulos 1992), and/or the connection between the two (e.g. Scher and Darley 1997). Pragmatic assessment may consider the value of communicative practices from the point of view of researchers, who want to be able to account for discursive behaviour (research assessment; e.g. Bektas-Cetinkaya 2012), or that of teachers, who need to provide feedback to students at the end of a teaching-learning cycle (classroom assessment; e.g. Ishihara 2009). Alternatively, it may analyse the design, implementation, characteristics and effects of the assessment process itself (e.g. Alemi and Khanlarzadeh 2017). For example, it may examine the assessment practices of teachers (e.g. Alcón 2015), other experts (e.g. Härmälä 2010; Sirikhan and Prapphal 2011), ordinary language users (e.g. Culpeper et al. 2010; Schauer 2017; Chen and Liu 2016), or learners/trainees (e.g. Ishihara 2010). Finally, it may focus on the degree of suitability and reliability of different types of rating instruments, such as rating scales (e.g. Youn 2018), comparisons of texts (e.g. Wolfe et al. 2016) and open-ended comments (e.g. Economidou-Kogetsidis 2015), as well as on the variety of rating criteria adopted: positive traits like appropriateness (e.g. Hacking 2008), negative traits like unacceptability (e.g. Bektas-Cetinkaya 2012) and neutral traits such as phrasing (e.g. Chen and Liu 2016).

To sum up, implementing and validating suitable assessment procedures for gauging learners’ pragmatic competence and performance is crucial for both research and teaching purposes, yet it is fraught with difficulties. Research on pragmatics assessment strives to maximise the accuracy, fairness, reliability, validity and usefulness of assessment instruments and methods for the benefit of all the stakeholders involved. This special issue of Lodz Papers in Pragmatics represents a small contribution to this strand of research.

2 On this special issue

Motivated by the above considerations, we held an international conference – Exploring and Assessing Pragmatic Aspects of L1 and L2 Communication: From Needs Analysis through Monitoring to Feedback (Dept. of Linguistic and Literary Studies, University of Padua, Italy, 25-27 July 2018) – with the goal of promoting a focused reflection on the description, exploration and assessment of pragmatic competence across registers, text types and contexts. The participants discussed topics ranging from how teacher (non)nativeness may influence the teaching of target-language pragmatics, through how to foster EFL teacher trainees’ pragmatic awareness, to how to approach the assessment of L2 language learners’ pragmatic appropriateness. This issue of Lodz Papers in Pragmatics, titled Assessing pragmatic aspects of L2 communication: reflections and practices, includes four conference presentations as well as two papers authored by scholars who, being strongly interested in the conference themes, generously agreed to contribute to our publication project.

The issue opens with a paper by Andrew D. Cohen, “Issues in the assessment of L2 pragmatics”, which provides an overview of current issues in the assessment of pragmatics, an increasingly important yet not well-established area of investigation (Cohen 2019). The author discusses the abilities and communicative practices that should be assessed in L2 pragmatics (e.g. fluency, sociolinguistics), the factors that might influence pragmatic behaviour (e.g. L1 background, prosody, dysfluency), and the trade-off between the feasibility of obtaining pragmatic data by means of a given method (e.g. DCTs, oral production) and its relevance to pragmatic assessment. Cohen also makes the important distinction between assessing pragmatics for research purposes vs for classroom instruction. With regard to the former, he examines the benefits of mixed methods (i.e. combining qualitative and quantitative approaches) and of data elicitation procedures (e.g. naturalistic data, data elicited through DCT), and the importance of choosing the norms to evaluate the appropriateness of a given pragmatic performance. These norms include the identification of a specific variety of English (e.g. British English, ELF), the degree of rater calibration and consistency, and the judgement of experts in a given domain (e.g. tourism). As regards the assessment of pragmatics for classroom instruction, Cohen discusses face validity, that is, the extent to which language learners perceive a given assessment method as valid and enjoyable, and the value of collecting verbal report data from respondents as a means of validating the assessment measures. The author concludes by calling for more collaboration between instructors and learners, with a view to giving more prominence to the assessment of pragmatics in the classroom.

Karen Glaser’s study “Assessing the L2 pragmatic awareness of non-native EFL teacher candidates: Is spotting a problem enough?” focuses on language learner awareness of grammatical (in)accuracies and pragmatic (in)felicities. Replicating and adapting Bardovi-Harlig and Dörnyei’s (1998) study, Glaser administered a metalinguistic judgement questionnaire to 84 advanced German EFL learners who were training to become primary school English instructors. The participants were presented with 15 scenarios, the last part of which might contain a pragmatically incorrect item, a grammatically incorrect one or no problem at all. They were asked to indicate instances of incorrectness and/or inappropriateness, to identify the nature of the grammatical vs pragmatic violation, if present, and to suggest a repair. By applying Flöck and Pfingsthorn’s (2017) Signal Detection Matrix, she reported participants’ Hits, Misses, False Alarms and Correct Rejections. The participants correctly identified inaccuracies, infelicities and unproblematic sentences 75% of the time, performing best at recognising unproblematic utterances and worst at recognising grammatical errors. On the other hand, while they successfully repaired most grammatical errors, they had difficulties repairing pragmatic infelicities, creating new problems in the process. Her analysis shows that correct problem identification does not necessarily equate with adequate repair abilities, at least for pragmatic problems; that situations exemplifying excessive politeness and formality were particularly challenging; and that, for both the grammar and the pragmatics items, responses varied considerably across individual situations. The author argues that, when comparing ‘grammar’ to ‘pragmatics’ situations, it is crucial to examine the specific phenomena involved, since their respective, highly variable challenges may influence the overall findings.
She also suggests that it may be useful to assess learners’ recognition and repair of overpolite/formal situations, which also illustrate pragmatic infelicities, and concludes that non-native English-speaking trainee teachers may benefit from focused training in pragmatic awareness and production.

In their paper “Rater variation in pragmatic assessment: The impact of the linguistic background on peer-assessment and self-assessment”, Sunni L. Sonnenburg-Winkler, Zohreh R. Eslami and Ali Derakhshan investigate the effect of language learners’ L1 backgrounds on both self-assessment and peer-assessment of pragmatic aspects of learner production (e.g. directness, politeness, formality). The authors had 10 MA level students from different linguistic backgrounds studying ESL in the US complete two DCTs. The students were then asked to assess their own responses and those of their peers, and, finally, to provide an explanation for their decisions. Overall, the raters tended to give similar ratings to the same samples, and raters from the same language background showed a higher level of agreement than raters from different language backgrounds. When assessing their peers, most raters tended to evaluate samples by participants sharing the same L1 in a similar way. When assessing themselves, the learners were sometimes more lenient than when assessing their peers, although findings were quite varied, showing no distinctive patterns. In line with previous research, this study indicates that there may be a link between linguistic background and rater scoring patterns. The authors encourage future research on the influence of raters’ personal characteristics on the reliability of their ratings.

Bárbara Eizaga-Rebollar and Cristina Heras-Ramírez’s contribution “Assessing pragmatic competence in oral proficiency interviews at the C1 level with the new CEFR descriptors” analyses how the updated descriptors of the CEFR at the C1 level define pragmatic competence. It then explores the extent to which the CEFR descriptions of pragmatic competence are operationalised in two popular Oral Proficiency Interviews (OPIs) at the C1 level, namely Cambridge’s Certificate in Advanced English (CAE) and Trinity’s Integrated Skills in English (ISE) III. In particular, CAE focuses mostly on discourse competence and fluency, thus aligning closely with the CEFR, while ISE III prioritises functional competence, which includes speaker meaning and propositional precision. The findings show that pragmatic competence is a recurring aspect in the descriptors of the scales of both OPIs, even though, in both cases, it does not feature as a distinct assessment criterion, but is part of L2 speaking proficiency. At the same time, it appears that both tests fail to accommodate all aspects of pragmatic competence and that there is a mismatch between the task competences and the rating scale competences. Finally, sample analyses of assessment practice in both OPIs reveal that examiners’ ratings do not always appear to be directly motivated by the tests’ descriptors. The authors conclude with some recommendations for examiner training and construct validity. These include: investigating which aspects of pragmatic competence in the scales examiners give most weight to in their ratings; checking whether these coincide with the competences targeted by the test tasks; and defining the proficiency threshold required for test-takers to be considered pragmatically competent at the C1 level.

The last two articles in this issue turn the reader’s attention to the pragmatic competence of Chinese learners of English and that of English learners of Chinese. In her “Developing pragmatic competence in English academic discussions: An EAP classroom investigation”, Marcella Caprario investigates the development of pragmatic competence among advanced EAP students at an English-medium university in China, who were attending a semester-long EAP course. The focus of the course was the academic discussion, and one of its overt objectives was developing the ability to interact with group members effectively and respectfully. An explicit-inductive approach was adopted for providing instruction in the sociopragmatics and pragmalinguistics of English-language academic discussions. Throughout the semester, the students engaged in ongoing reflective writing, which was meant to make them aware of their process of developing pragmatic competence. The reflective writing of five students was qualitatively examined through template analysis (Hanks 2017). The analysis revealed some key issues faced by the students (e.g. lack of clarity when speaking), their causes (e.g. limited linguistic competence), and the corrective steps taken (e.g. better time management). Content analysis also brought to the fore the impact of students’ emotional lives on their learning and performance, with negative emotions causing hesitation or avoidance of oral participation, but at times also acting as a catalyst for change after an unsatisfactory performance. The results show that self-reflection was useful for the students to take ownership of their own learning process, and for the instructor to notice communal and individual needs to be addressed with targeted instruction.
Caprario concludes that teaching pragmatic competence in academic discussions can foster collaborative teaching and learning, favour the development of students’ critical thinking skills, and empower learners to develop autonomy.

In their paper “Evaluating the appropriacy of Ritual Frame Indicating Expressions (RFIEs) – A case study of learners of Chinese and English”, Juliane House and Dániel Z. Kádár set out to study RFIEs, that is, conventionalised expressions by means of which the speaker expresses his/her awareness of rights and obligations (Goffman 1967). Specifically, they investigate the equivalence and contextual appropriateness of the Chinese RFIE 请 (qing) and its English counterpart please, as well as that of the Chinese RFIE 对不起 (duibuqi) and of the corresponding English expression sorry. They administered a questionnaire to and conducted follow-up interviews with seven British learners of Mandarin Chinese and seven Chinese learners of British English. They asked the learners to evaluate a series of appropriate and inappropriate uses of these RFIEs in the target languages along dimensions such as formality and politeness. The results revealed linguacultural differences between the two groups. On the one hand, most British respondents were better at identifying the appropriate uses of the RFIEs than the inappropriate ones, and tended to be influenced by stereotypes in their answers. On the other hand, the Chinese respondents tended to apply their own cultural views to the evaluation of the target language RFIEs. Implications are drawn for teaching and learning pragmatic aspects of the target languages and for successful intercultural communication.

The contributions to this issue illustrate some of the many directions in which the various aspects of pragmatic skills assessment can be explored. They show not only that various facets of assessment need to be investigated, but also that these facets can be approached by using a variety of quantitative and qualitative methods, which often fruitfully complement each other. Their findings, obtained following rigorous analytical procedures, lead us to a better understanding of assessment and raise new questions worth exploring in future studies.


1 The first author wrote Section 1 and the second author Section 2.



References

Alcón, Eva. 2015. Teachers’ perceptions of email requests: Insights for teaching pragmatics in study abroad contexts. In Sara Gesuato, Francesca Bianchi & Winnie Cheng (eds.), Teaching, learning and investigating pragmatics: Principles, methods and practices, 9–26. Newcastle upon Tyne: Cambridge Scholars Publishing.

Alemi, Minoo, Zohreh R. Eslami & Atefeh Rezanejad. 2014. Rating EFL learners’ interlanguage pragmatic competence by non-native English speaking teachers. Procedia – Social and Behavioral Sciences 98. 171–174. doi:10.1016/j.sbspro.2014.03.403

Alemi, Minoo & Neda Khanlarzadeh. 2017. Native and non-native teachers’ pragmatic criteria for rating request speech acts: The case of American and Iranian EFL teachers. Applied Research on English Language 6(1). 67–84.

Alemi, Minoo & Zia Tajeddin. 2013. Pragmatic rating of L2 refusal: Criteria of native and nonnative English teachers. TESL Canada Journal/Revue TESL du Canada 30(7). 63–81. doi:10.18806/tesl.v30i7.1152

The Appraisal Website. The language of attitude, arguability and interpersonal positioning. http://www.grammatics.com/appraisal (accessed 31 August 2020).

Bardovi-Harlig, Kathleen. 2013. Developing L2 pragmatics. Language Learning 63(Supplement 1). 68–86. doi:10.1111/j.1467-9922.2012.00738.x

Bardovi-Harlig, Kathleen & Zoltan Dörnyei. 1998. Do language learners recognize pragmatic violations? Pragmatic versus grammatical awareness in instructed L2 learning. TESOL Quarterly 32(2). 233–262. doi:10.2307/3587583

Barron, Anne. 2003. Acquisition in interlanguage pragmatics: Learning how to do things with words in a study abroad context. Amsterdam/Philadelphia: John Benjamins Publishing Company. doi:10.1075/pbns.108

Beason, Larry. 2001. Ethos and error: How business people react to error. College Composition and Communication 53(1). 33–64. doi:10.2307/359061

Bektas-Cetinkaya, Yesim. 2012. Pre-service EFL teachers’ pragmatic competence: The Turkish case. International Journal of Language Studies 6(2). 107–122.

Black, Paul & Dylan Wiliam. 1998. Assessment and classroom learning. Assessment in Education 5(1). 7–74. doi:10.4135/9781446250808.n2

Chen, Yuan-shan & Jianda Liu. 2016. Constructing a scale to assess L2 written speech act performance: WDCT and e-mail tasks. Language Assessment Quarterly 13(3). 231–250. doi:10.1080/15434303.2016.1213844

Cheng, Liying & Janna Fox. 2017. Why do we assess? In Liying Cheng & Janna Fox (eds.), Assessment in the Language Classroom (Applied Linguistics for the Language Classroom), 1–29. London: Palgrave. doi:10.1057/978-1-137-46484-2

Cohen, Andrew D. 2004. Assessing speech acts in a second language. In Diana Boxer & Andrew D. Cohen (eds.), Studying speaking to inform second language learning, 302–327. Clevedon, England: Multilingual Matters.

Cohen, Andrew D. 2019. Considerations in assessing pragmatic appropriateness in spoken language. Language Teaching, 1–20. doi:10.1017/S0261444819000156

Crystal, David. 1997. English as a global language. Cambridge: Cambridge University Press.

Culpeper, Jonathan, Leyla Marti, Meilian Mei, Minna Nevala & Gila Schauer. 2010. Cross-cultural variation in the perception of impoliteness: A study of impoliteness events reported by students in England, China, Finland, Germany and Turkey. Intercultural Pragmatics 7(4). 597–624. doi:10.1515/iprg.2010.027

Economidou-Kogetsidis, Maria. 2015. Teaching email politeness in the EFL/ESL classroom. ELT Journal 69(4). 415–424. doi:10.1093/elt/ccv031

Flöck, Ilka & Joanna Pfingsthorn. 2014. Pragmatik und Englischunterricht [Pragmatics and English language teaching]. In Wolfgang Gehring & Matthias Merkl (eds.), Englisch lehren, lernen, erforschen [Teaching, learning and researching English], 175–199. Oldenburg: BIS-Verlag.

Goffman, Erving. 1967. Interaction ritual: Essays on face-to-face behavior. Garden City, NY: Doubleday.

Hacking, Jane F. 2008. Socio-pragmatic competence in Russian: How input is not enough. In Stacey L. Katz & Johanna Watzinger-Tharp (eds.), Conceptions of L2 grammar: Theoretical approaches and their application in the L2 classroom, 110–125. AAUSC.

Hanks, Judith. 2017. Integrating research and pedagogy: An exploratory practice approach. System 68. 38–49. doi:10.1016/j.system.2017.06.012

Härmälä, Marita. 2010. Linguistic, sociolinguistic, and pragmatic competence as criteria in assessing vocational language skills: The case of Finland. Melbourne Papers in Language Testing 15(2). 1–43.

Hinkel, Eli. 1997. Appropriateness of advice: DCT and multiple choice data. Applied Linguistics 18(1). 1–26. doi:10.1093/applin/18.1.1

Hudson, Thom, Emily Detmer & James D. Brown. 1995. Developing prototypic measures of cross-cultural pragmatics (Technical Report No. 7). Honolulu, HI: University of Hawai’i at Manoa, Second Language Teaching & Curriculum Center.

Hunston, Susan. 1994. Evaluation and organization in a sample of written academic discourse. In Malcolm Coulthard (ed.), Advances in written text analysis, 191–218. London: Routledge.

Ishihara, Noriko. 2009. Teacher-based assessment for foreign language pragmatics. TESOL Quarterly 43(3). 445–470. doi:10.1002/j.1545-7249.2009.tb00244.x

Ishihara, Noriko. 2010. Assessing learners’ pragmatic ability in the classroom. In Donna Tatsuki & Noel R. Houck (eds.), Pragmatics: Teaching speech acts, 209–227. Alexandria, VA: Teachers of English to Speakers of Other Languages.

Janopoulos, Michael. 1992. University faculty tolerance of NS and NNS writing errors: A comparison. Journal of Second Language Writing 1(2). 109–121. doi:10.1016/1060-3743(92)90011-D

Kohn, Alfie. 2011. The case against grades. Educational Leadership (no pagination). https://www.alfiekohn.org/article/case-grades/ (accessed 31 August 2020).

Koike, Dale April. 1989. Pragmatic competence and adult L2 acquisition: Speech acts in interlanguage. The Modern Language Journal 73(3). 279–289. doi:10.1111/j.1540-4781.1989.tb06364.x

Krulatz, Anna. 2015. Judgments of politeness in Russian: How non-native requests are perceived by native speakers. Intercultural Communication Studies XXIV(1). 103–122.

Messick, Samuel. 1989. Meaning and values in test validation: The science and ethics of assessment, Educational Researcher 18(2). 5–11.10.3102/0013189X018002005Search in Google Scholar

Roever, Carsten. 2005. Testing ESL pragmatics Frankfurt: Peter Lang.10.3726/978-3-653-04780-6Search in Google Scholar

Roever Carsten. 2011. Testing of second language pragmatics: Past and future. Language Testing 28(4). 463–481.10.1177/0265532210394633Search in Google Scholar

Schauer, Gila A. 2017. “It's really insulting to say something like that to anyone”: An investigation of English and German native speakers’ impoliteness perceptions. In Istvan Kecskes & Stavros Assimakopoulos (eds.), Current Issues in Intercultural Pragmatics 207–227. Amsterdam/Philadelphia: John Benjamins Publishing Company.10.1075/pbns.274.10schSearch in Google Scholar

Scher, Steven J. & John M. Darley 1997. How effective are the things people say to apologize? Effects of the realization of the apology speech act. Journal of Psycholinguistic Research 26(1). 127–140.10.1023/A:1025068306386Search in Google Scholar

Sirikhan, Sonporn & Kanchana Prapphal. 2011. Assessing pragmatic ability of Thai hotel management and tourism students in the context of hotel front office department. Asian EFL Journal Professional Teaching Articles Volume 53. 72–94.Search in Google Scholar

Sydorenko, Tetyana, Carson Maynard & Erin Guntly. 2014. Rater behaviour when judging language learners’ pragmatic appropriateness in extended discourse. TESL Canada Journal/Revue TESL du Canada 32(1). 19–41.10.18806/tesl.v32i1.1197Search in Google Scholar

Taguchi, Naoko. 2006. Analysis of appropriateness in a speech act of request. Pragmatics 16(4). 513–533.10.1075/prag.16.4.05tagSearch in Google Scholar

Tajeddin, Zia & Minoo Alemi. 2014. Criteria and bias in native English teachers’ assessment of L2 pragmatic appropriacy: Content and FACETS analyses. Asia-Pacific Education Research 23(3). 425–434.10.1007/s40299-013-0118-5Search in Google Scholar

Taylor, Catherine S. & Susan B. Nolen 2008 Classroom assessment: Supporting teaching and learning in real classrooms 2nd Edition. London: Pearson.Search in Google Scholar

Thomas, Jenny. 1983. Cross-cultural pragmatic failure. Applied Linguistics 4(2). 91–112.10.1093/applin/4.2.91Search in Google Scholar

Wolfe, Joanna, Nisha Shanmugaraj & Jaclyn Sipe. 2016. Grammatical versus pragmatic error: Employer perceptions of nonnative and native English speakers. Business and Professional Communication Quarterly 79(4). 397–415.10.1177/2329490616671133Search in Google Scholar

Youn, Soo Young. 2018. Rater variability across examinees and rating criteria in paired speaking assessment. Papers in Language Testing and Assessment 7(1). 32–60.Search in Google Scholar

Published Online: 2020-10-15
Published in Print: 2020-07-28

© 2020 Walter de Gruyter GmbH, Berlin/Boston

This work is licensed under the Creative Commons Attribution 4.0 International License.
