|Publication type:||Conference other|
|Type of review:||Peer review (abstract)|
|Title:||The augmented interpreter: a pilot study on the use of augmented reality in interpreting|
|Authors:||Gieshoff, Anne Catherine|
|Conference details:||3rd HKBU International Conference on Interpreting: Interpreting and Technology, Hong Kong (online), 7-9 December 2022|
|Subjects:||Augmented reality; Simultaneous interpreting; Terminology; Interview; Digital transformation|
|Subject (DDC):||418.02: Translating and interpreting|
|Abstract:||Simultaneous interpreters' primary concern is to provide a high-quality target language rendition that accurately reflects the source language speech in its content, including subtle nuances, terminology and register. Managing visual input feeds into this cognitively demanding process: interpreters take cues from speakers' facial expressions and body language, interspeaker dynamics and audience response, and they consult any projected visuals, texts or presentations they have before them in the booth, as well as glossaries. To date, interpreters have used on-screen or printed glossaries or dedicated applications (Jiang 2013), which require them to interrupt their visual information processing to look up technical terms. This interruption may momentarily increase the interpreters' cognitive load, as anecdotally evidenced by the fact that it is considered good practice for interpreters to help each other out in the booth by pointing to or writing down the term in question. A similar issue may also affect CAI tools: initial studies suggest that the question of how to display the output of a CAI tool is not a trivial one (Defrancq and Fantinuoli 2021; Desmet, Vandierendonck, and Defrancq 2018). A possible way to improve interpreters' cognitive ergonomics when looking up terms is augmented reality (AR) (see also Ziegler and Gigliobianco 2018). This technology allows a seamless integration of auditory and visual information by displaying translations of technical terms on augmented reality glasses, directly in the interpreters' field of view, at the moments when they would usually consult their glossary. To explore this possibility, we are conducting a pilot study with ten professional interpreters. They are asked to interpret two highly technical conference talks of ten minutes each: the first with a conventional glossary on a laptop, the second with augmented reality glasses.
In both conditions, interpreters receive a copy of the corresponding presentation slides, which they can consult during the interpretation. In the AR condition, translations of terms are displayed on the glasses whenever the interpreter looks at a slide containing the corresponding term or whenever the speaker pronounces the term. In the conventional glossary condition, interpreters receive a word list containing the same terms as those displayed on the AR glasses. The glossary is presented as a table in a Word document on a laptop to mimic a realistic situation, because Word files are still commonly used as glossaries in the booth. Interpreters' renditions are assessed with regard to general sense consistency and use of terminology, as well as disfluencies preceding the technical term in the rendition. After the interpreting task, interpreters fill out the NASA Task Load Index (Hart and Staveland 1988) and the System Usability Scale (Brooke 1996) to evaluate their perceived cognitive load in both conditions and the usability of both presentation modes, AR and Word glossary. To gain further insight into how interpreters experienced the two conditions, we conduct semi-structured interviews with the participants at the end of the session. In our presentation, we will report on preliminary results and discuss the use of AR in interpreting.|
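The two questionnaires cited in the abstract have well-established standard scoring procedures. As a minimal illustrative sketch (not the study's own analysis code), the System Usability Scale (Brooke 1996) and the raw, unweighted variant of the NASA Task Load Index (Hart and Staveland 1988) can be scored as follows; the example response values are invented:

```python
def sus_score(responses):
    """Score a 10-item System Usability Scale questionnaire (Brooke 1996).

    `responses` are the ten item ratings on a 1-5 Likert scale, in order.
    Odd-numbered items contribute (rating - 1), even-numbered items
    contribute (5 - rating); the sum is scaled to the 0-100 range.
    """
    assert len(responses) == 10
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # items 1, 3, 5, ... sit at even indices
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5


def raw_tlx(subscales):
    """Raw (unweighted) NASA-TLX: the mean of the six subscale ratings (0-100)."""
    assert len(subscales) == 6
    return sum(subscales) / 6


# Invented example data: a fairly positive SUS pattern, moderate workload
print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))  # -> 80.0
print(raw_tlx([60, 30, 55, 40, 50, 35]))          # -> 45.0
```

Raw TLX (averaging without the pairwise-weighting step) is a common simplification of the original weighted procedure; which variant a given study uses is a design choice.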
|Fulltext version:||Published version|
|License (according to publishing contract):||Licence according to publishing contract|
|Organisational Unit:||Institute of Translation and Interpreting (IUED)|
|Published as part of the ZHAW project:||The augmented interpreter: a pilot study on the benefits of augmented reality in interpreting|
|Appears in collections:||Publikationen Angewandte Linguistik|
Files in This Item:
There are no files associated with this item.
Gieshoff, A. C., & Schuler, M. (2022). The augmented interpreter: a pilot study on the use of augmented reality in interpreting. 3rd HKBU International Conference on Interpreting: Interpreting and Technology, Hong Kong (online), 7-9 December 2022.