Please use this identifier to cite or link to this item:
https://doi.org/10.21256/zhaw-26130
Publication type: | Conference paper |
Type of review: | Peer review (publication) |
Title: | Probing the robustness of trained metrics for conversational dialogue systems |
Authors: | Deriu, Jan Milan; Tuggener, Don; von Däniken, Pius; Cieliebak, Mark |
et al.: | No |
DOI: | 10.18653/v1/2022.acl-short.85; 10.21256/zhaw-26130 |
Proceedings: | Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics |
Volume(Issue): | 2 |
Page(s): | 750 |
Pages to: | 761 |
Conference details: | 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Dublin, Ireland, 22-27 May 2022 |
Issue Date: | May-2022 |
Publisher / Ed. Institution: | Association for Computational Linguistics |
Language: | English |
Subjects: | Evaluation; Dialogue system |
Subject (DDC): | 410.285: Computational linguistics |
Abstract: | This paper introduces an adversarial method to stress-test trained metrics for the evaluation of conversational dialogue systems. The method leverages Reinforcement Learning to find response strategies that elicit optimal scores from the trained metrics. We apply our method to test recently proposed trained metrics. We find that they all are susceptible to giving high scores to responses generated by rather simple and obviously flawed strategies that our method converges on. For instance, simply copying parts of the conversation context to form a response yields competitive scores or even outperforms responses written by humans. |
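The abstract notes that simply copying parts of the conversation context can yield competitive metric scores. A minimal sketch of such a copy-context strategy (illustrative only — the function name, turn count, and example dialogue are invented here, not taken from the paper's code):

```python
# Illustrative sketch of the "copy parts of the context" strategy the
# abstract describes. This is NOT the authors' implementation; the helper
# name and parameters are hypothetical.

def copy_context_response(context, n_turns=2):
    """Form a 'response' by copying the last n_turns utterances verbatim."""
    return " ".join(context[-n_turns:])

# Invented example context for illustration:
context = [
    "Hi, how are you?",
    "I'm good, thanks. I just got back from a hiking trip.",
    "That sounds fun! Where did you go?",
]
print(copy_context_response(context))
```

A trained metric that rewards topical and lexical overlap with the context can assign such degenerate responses high scores, which is the failure mode the paper's RL-based probing converges on.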
URI: | https://digitalcollection.zhaw.ch/handle/11475/26130 |
Fulltext version: | Published version |
License (according to publishing contract): | CC BY 4.0: Attribution 4.0 International |
Departement: | School of Engineering |
Organisational Unit: | Centre for Artificial Intelligence (CAI) |
Appears in collections: | Publikationen School of Engineering |
Files in This Item:
File | Description | Size | Format
---|---|---|---
2022_Deriu-etal_Robustness-probing-of-trained-metrics.pdf | | 307.81 kB | Adobe PDF
Deriu, J. M., Tuggener, D., von Däniken, P., & Cieliebak, M. (2022). Probing the robustness of trained metrics for conversational dialogue systems [Conference paper]. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, 2, 750–761. https://doi.org/10.18653/v1/2022.acl-short.85
Deriu, J.M. et al. (2022) ‘Probing the robustness of trained metrics for conversational dialogue systems’, in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pp. 750–761. Available at: https://doi.org/10.18653/v1/2022.acl-short.85.
J. M. Deriu, D. Tuggener, P. von Däniken, and M. Cieliebak, “Probing the robustness of trained metrics for conversational dialogue systems,” in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, May 2022, vol. 2, pp. 750–761. doi: 10.18653/v1/2022.acl-short.85.
DERIU, Jan Milan, Don TUGGENER, Pius VON DÄNIKEN and Mark CIELIEBAK, 2022. Probing the robustness of trained metrics for conversational dialogue systems. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics. Conference paper. Association for Computational Linguistics. May 2022. pp. 750–761.
Deriu, Jan Milan, Don Tuggener, Pius von Däniken, and Mark Cieliebak. 2022. “Probing the Robustness of Trained Metrics for Conversational Dialogue Systems.” Conference paper. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, 2:750–61. Association for Computational Linguistics. https://doi.org/10.18653/v1/2022.acl-short.85.
Deriu, Jan Milan, et al. “Probing the Robustness of Trained Metrics for Conversational Dialogue Systems.” Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, vol. 2, Association for Computational Linguistics, 2022, pp. 750–61, https://doi.org/10.18653/v1/2022.acl-short.85.
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.