Publication type: Conference other
Type of review: Not specified
Title: Automated assessment of writing and pragmatic competence: a critical review
Authors: Gautschi, Curtis
et al.: No
Proceedings: 17th International Pragmatics Conference: abstracts
Page(s): 261
Conference details: 17th International Pragmatics Conference (IPrA), Winterthur (online), 27 June - 2 July 2021
Issue Date: 2021
Publisher / Ed. Institution: International Pragmatics Association (IPrA)
Language: English
Subjects: Automated assessment; Pragmatics; Writing assessment
Subject (DDC): 410.285: Computational linguistics
808: Rhetoric and writing
Abstract: The automated scoring of essays and other written texts (AES), by means of natural language processing tools, is playing an ever-increasing role in the evaluation of writing ability. While able to provide quick, cost-effective and consistent feedback to learners of foreign languages (Shermis & Burstein, 2013), the use of computers to assess writing skills has been met with considerable scepticism (Condon, 2013). Of particular concern is the issue of construct relevance and representation in the automated assessment of writing (Weigle, 2013), which has relied heavily on a narrow range of linguistic features (e.g., McNamara, Crossley & McCarthy, 2010), cohesion markers or complexity indices (e.g., Coh-index; see Allen, Jacovina, & McNamara, 2016). Thus, while aspects of organizational competence (e.g., grammatical, lexical, syntactic and text-level organization) are to some degree measurable, the valid measurement of pragmatic competence remains a challenge for AES (e.g., communicative facets including message conveyance, communicative intent, sensitivity to genre, and the socially constructed transfer of meaning; see Allen et al., 2016; Bridgeman, Powers, Stone & Mollaun, 2011; Deane, 2013). This has been acknowledged by leaders in the high-stakes AES industry in their efforts to improve the measurement of pragmatic aspects in the communicative paradigm for both automated assessment of writing and speaking (e.g., Chen et al., 2018; Evanini, Hauck & Hakuta, 2017).

This presentation seeks to problematize the issues raised above. First, an AES tool and CEFR-level prediction algorithm, created and experimentally implemented at a major University of Applied Sciences in Switzerland as part of an online English placement test for first-year engineering students (Gautschi, 2020), is critically examined with respect to its design, results and coverage of pragmatic competence aspects.
Second, the AES tool is placed within the context of current AES research and approaches to modelling pragmatic competence, with a view to critically reviewing:
• current scoring model approaches
• claims of communicative and pragmatic competence assessment
• advancements in scoring accuracy through deep learning / neural network techniques
• the reliance on human-machine agreement for validation support.

Finally, it is argued that the future value of AES in testing crucially requires the active involvement of linguists and applied linguists to safeguard construct coverage of pragmatic competence in the high-stakes assessment of communicative language ability by means of automated writing assessment.
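The reliance on human-machine agreement mentioned above is typically operationalized in AES validation studies as quadratic weighted kappa (QWK) between human and machine scores, a standard agreement statistic in this literature. The following is a minimal illustrative sketch of that statistic, not part of the tool described in the abstract; the function name and integer score range are assumptions for the example:

```python
from collections import Counter

def quadratic_weighted_kappa(human, machine, min_score, max_score):
    """Quadratic weighted kappa between two raters' integer scores.

    1.0 means perfect agreement; 0.0 means agreement no better than
    chance given each rater's marginal score distribution.
    """
    n = len(human)
    k = max_score - min_score + 1  # number of score categories

    # Observed confusion matrix of human vs. machine scores.
    observed = [[0.0] * k for _ in range(k)]
    for h, m in zip(human, machine):
        observed[h - min_score][m - min_score] += 1

    # Marginal score histograms, used to build the chance-expected matrix.
    hist_h = Counter(h - min_score for h in human)
    hist_m = Counter(m - min_score for m in machine)

    num = den = 0.0
    for i in range(k):
        for j in range(k):
            weight = (i - j) ** 2 / (k - 1) ** 2  # quadratic disagreement penalty
            expected = hist_h[i] * hist_m[j] / n
            num += weight * observed[i][j]
            den += weight * expected
    return 1.0 - num / den

# Identical score vectors yield QWK = 1.0; statistically independent
# scores yield a value near 0.0.
```

A critical point the abstract raises follows directly from this definition: QWK rewards a machine for reproducing human score distributions, and says nothing about whether the underlying construct (here, pragmatic competence) is actually being measured.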
Fulltext version: Published version
License (according to publishing contract): Licence according to publishing contract
Departement: Applied Linguistics
Organisational Unit: Institute of Language Competence (ILC)
Appears in collections: Publikationen Angewandte Linguistik

Files in This Item:
There are no files associated with this item.
