Publication type: Working paper – expertise – study
Title: Survey on evaluation methods for dialogue systems
et al.: No
Authors: Deriu, Jan Milan
Rodrigo, Alvaro
Otegi, Arantxa
Echegoyen, Guillermo
Rosset, Sophie
Agirre, Eneko
Cieliebak, Mark
DOI: 10.21256/zhaw-18985
Extent: 62 pp.
Issue Date: 10-May-2019
Publisher / Ed. Institution: ZHAW Zürcher Hochschule für Angewandte Wissenschaften
Other identifiers: arXiv: 1905.04071v1
Language: English
Subject (DDC): 004: Computer science
Abstract: In this paper we survey the methods and concepts developed for the evaluation of dialogue systems. Evaluation is a crucial part of the development process. Often, dialogue systems are evaluated by means of human evaluations and questionnaires; however, these tend to be very cost- and time-intensive. Thus, much work has been put into finding methods that reduce the involvement of human labour. In this survey, we present the main concepts and methods. For this, we differentiate between the various classes of dialogue systems (task-oriented dialogue systems, conversational dialogue systems, and question-answering dialogue systems). We cover each class by introducing the main technologies developed for these dialogue systems and then presenting the evaluation methods for that class.
License (according to publishing contract) : CC BY 4.0: Attribution 4.0 International
Departement: School of Engineering
Organisational Unit: Institute of Applied Information Technology (InIT)
Appears in Collections:Publikationen School of Engineering

Files in This Item:
File: arxiv_1905.04071.pdf
Description: Survey on Evaluation Methods for Dialogue Systems
Size: 776.65 kB
Format: Adobe PDF
