MDD-Eval: Self-Training on Augmented Data for Multi-Domain Dialogue Evaluation

Authors

  • Chen Zhang: National University of Singapore, Singapore; Robert Bosch (SEA), Singapore
  • Luis Fernando D'Haro: Universidad Politécnica de Madrid, Spain
  • Thomas Friedrichs: Robert Bosch (SEA) Pte Ltd, Singapore
  • Haizhou Li: National University of Singapore, Singapore; Kriston AI Lab, China; The Chinese University of Hong Kong (Shenzhen), China

DOI:

https://doi.org/10.1609/aaai.v36i10.21420

Keywords:

Speech & Natural Language Processing (SNLP)

Abstract

Chatbots are designed to carry out human-like conversations across different domains, such as general chit-chat, knowledge exchange, and persona-grounded conversations. To measure the quality of such conversational agents, a dialogue evaluator is expected to conduct assessment across domains as well. However, most of the state-of-the-art automatic dialogue evaluation metrics (ADMs) are not designed for multi-domain evaluation. To address this problem, we propose MDD-Eval, a general and robust framework. Specifically, we first train a teacher evaluator with human-annotated data to acquire a rating skill to tell good dialogue responses from bad ones in a particular domain, and then adopt a self-training strategy to train a new evaluator with teacher-annotated multi-domain data, which helps the new evaluator generalize across multiple domains. MDD-Eval is extensively assessed on six dialogue evaluation benchmarks. Empirical results show that the MDD-Eval framework achieves strong performance, with an absolute improvement of 7% over the state-of-the-art ADMs in terms of mean Spearman correlation scores across all the evaluation benchmarks.
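
The abstract describes a two-stage recipe: a teacher evaluator is first trained on human-annotated, single-domain data, and a new (student) evaluator is then trained on multi-domain data that the teacher has pseudo-labelled. The toy sketch below only illustrates that flow under stated assumptions; the class ResponseScorer, its scoring and training rules, and the sample data are hypothetical placeholders, not the paper's actual model or code.

```python
# Minimal, self-contained sketch of the teacher-student self-training idea
# summarised in the abstract. NOT the authors' implementation: ResponseScorer,
# its toy scoring/training rules, and all data below are hypothetical.

class ResponseScorer:
    """Toy stand-in for a dialogue-response evaluator (teacher or student)."""

    def __init__(self):
        self.bias = 0.0

    def score(self, context, response):
        # Placeholder rule: slightly favour longer responses, shifted by a
        # learned bias. A real evaluator would use a trained neural model.
        return max(0.0, min(1.0, 0.1 * len(response.split()) + self.bias))

    def fit(self, examples):
        # Placeholder "training": move the bias toward the mean label.
        if examples:
            self.bias = sum(label for *_, label in examples) / len(examples) - 0.5


# Step 1: train a teacher evaluator on human-annotated data from one domain.
human_labelled = [
    ("hi there", "hello! how are you today?", 1.0),
    ("hi there", "purple monkey dishwasher", 0.0),
]
teacher = ResponseScorer()
teacher.fit(human_labelled)

# Step 2: the teacher pseudo-labels unlabelled dialogues from other domains.
multi_domain_pool = [
    ("tell me about cats", "cats are small domesticated felines"),
    ("do you have hobbies?", "i enjoy hiking on weekends"),
]
pseudo_labelled = [(c, r, teacher.score(c, r)) for c, r in multi_domain_pool]

# Step 3: self-training; a new evaluator is trained on the teacher-annotated
# multi-domain data so it generalises across domains.
student = ResponseScorer()
student.fit(human_labelled + pseudo_labelled)
print(student.score("hi there", "hello! nice to meet you"))
```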

Published

2022-06-28

How to Cite

Zhang, C., D’Haro, L. F., Friedrichs, T., & Li, H. (2022). MDD-Eval: Self-Training on Augmented Data for Multi-Domain Dialogue Evaluation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10), 11657-11666. https://doi.org/10.1609/aaai.v36i10.21420

Issue

Vol. 36 No. 10 (2022)

Section

AAAI Technical Track on Speech and Natural Language Processing