Title: General proficiency speaking test assessment criteria: a validation study
Authors: Gautschi, Curtis
Conference details: GSCP Gruppo di studio sulla comunicazione parlata international conference, Napoli, Italy, 13-15 June 2016
License (according to publishing contract): Licence according to publishing contract
Type of review: Not specified
Subjects: Foreign language ability; Test validation; General proficiency testing; Speaking competence
Subject (DDC): 371: Schools and their activities; 401.9: Psycholinguistics and sociolinguistics
Abstract: Given the important role foreign language ability can play in the personal mobility and employment opportunities of non-native speakers, there is naturally great demand from all interested parties for accurate measurement and certification of such ability. General Proficiency testing (GPT), as the predominant form of foreign language assessment in both public schools and international language assessment bodies, has developed to meet this need. The assessment criteria used in GP speaking tests, however, have not been developed from empirical study of the judgments of those found in the actual contexts where the language will be used in communication. Rather, currently operationalized GP tests use assessment criteria based on language-expert intuition and the judgments of language professionals (McNamara 1996, Fulcher 2003, Taylor & Galaczi 2011). Since the implied context of such tests lies beyond the language classroom, it follows that non-language professionals best represent those who will be involved in communicative acts in the real contexts of post-test language use. Because the perspectives of non-language professionals are not taken into consideration in the selection and operationalization of assessment criteria, there is a risk that current assessment criteria may not adequately represent the intended context of post-test language use. This risk, in turn, compromises the meaningfulness of test scores, which is broadly considered the foundation of test validation (Fulcher 2003: 117, American Educational Research Association 1999: 9). In this light, a two-part study was designed to investigate the judgments of non-language professionals in reaction to English non-native speech as the basis for a new approach to the validation of current GP speaking test assessment criteria.
In contrast to previous studies that a) explore how non-language professionals use preexisting rating scales or preselected assessment criteria (e.g., Barnwell 1989, Hadden 1991, Bridgeman et al. 2011), or b) explore the reactions of non-language professionals to specific aspects of speech (e.g., Housen et al. 2012, Pinget et al. 2014), the present study first attempts to identify what non-language professionals naturally attend to, without imposing criteria a priori, and then investigates the role of these aspects in the perception of speaking proficiency. The present study also expands on Sato's (2014) work on lay raters' perspectives on communication ability by comparing language professionals with non-language professionals, and by gathering quantitative data relevant to examining validation evidence for current GP speaking test practices.
Organisational Unit: Institute of Language Competence (ILC)
Publication type: Conference other
Appears in Collections: Publikationen Angewandte Linguistik