Please use this identifier to cite or link to this item: https://doi.org/10.21256/zhaw-29064
Publication type: Conference paper
Type of review: Peer review (publication)
Title: Missing information, unresponsive authors, experimental flaws: the impossibility of assessing the reproducibility of previous human evaluations in NLP
Author(s): Belz, Anya
Thomson, Craig
Reiter, Ehud
Cieliebak, Mark
Hürlimann, Manuela
et al.: Yes
DOI: 10.18653/v1/2023.insights-1.1
DOI (ZHAW): 10.21256/zhaw-29064
Published in: The Fourth Workshop on Insights from Negative Results in NLP
Pages: 1–10
Conference details: The Fourth Workshop on Insights from Negative Results in NLP, Dubrovnik, Croatia, 2–6 May 2023
Publication date: 2023
Publisher / Editing institution: Association for Computational Linguistics
Language: English
Keywords: Reproducibility; System evaluation; Natural language processing
Subject area (DDC): 410.285: Computational linguistics
Abstract: We report our efforts in identifying a set of previous human evaluations in NLP that would be suitable for a coordinated study examining what makes human evaluations in NLP more/less reproducible. We present our results and findings, which include that just 13% of papers had (i) sufficiently low barriers to reproduction, and (ii) enough obtainable information, to be considered for reproduction, and that all but one of the experiments we selected for reproduction were discovered to have flaws that made the meaningfulness of conducting a reproduction questionable. As a result, we had to change our coordinated study design from a reproduce approach to a standardise-then-reproduce-twice approach. Our overall (negative) finding that the great majority of human evaluations in NLP are not repeatable and/or not reproducible and/or too flawed to justify reproduction paints a dire picture, but presents an opportunity for a rethink about how to design and report human evaluations in NLP.
URI: https://digitalcollection.zhaw.ch/handle/11475/29064
Full-text version: Published version
License (according to publisher agreement): CC BY 4.0: Attribution 4.0 International
Department: School of Engineering
Organisational unit: Centre for Artificial Intelligence (CAI)
Appears in collections: Publications School of Engineering

Files in this item:
File: 2023_Belz-etal_Reproducibility-of-previous-human-evaluations-in-NLP.pdf (145.07 kB, Adobe PDF)
Belz, A., Thomson, C., Reiter, E., Cieliebak, M., & Hürlimann, M. (2023). Missing information, unresponsive authors, experimental flaws: the impossibility of assessing the reproducibility of previous human evaluations in NLP [Conference paper]. The Fourth Workshop on Insights from Negative Results in NLP, 1–10. https://doi.org/10.18653/v1/2023.insights-1.1


All items in this repository are protected by copyright, unless otherwise indicated.