Dual Task Framework for Improving Persona-Grounded Dialogue Dataset

Authors

  • Minju Kim, Yonsei University
  • Beong-woo Kwak, Yonsei University
  • Youngwook Kim, Yonsei University
  • Hong-in Lee, Yonsei University
  • Seung-won Hwang, Seoul National University
  • Jinyoung Yeo, Yonsei University

DOI:

https://doi.org/10.1609/aaai.v36i10.21338

Keywords:

Speech & Natural Language Processing (SNLP)

Abstract

This paper introduces a simple yet effective data-centric approach to improving persona-conditioned dialogue agents. Prior model-centric approaches rely unquestioningly on raw crowdsourced benchmark datasets such as Persona-Chat. In contrast, we aim to fix annotation artifacts in the benchmark itself, which is orthogonally applicable to any dialogue model. Specifically, we augment relevant personas to improve the dialogue dataset and agent by leveraging the primal-dual structure of two tasks: predicting dialogue responses and personas from each other. Experiments on Persona-Chat show that our approach outperforms pre-trained LMs by an 11.7-point gain in accuracy.
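To make the abstract's idea concrete, below is a minimal, hypothetical sketch (not the authors' released code) of how a primal-dual pair of scorers might be combined to augment a dialogue example with additional personas: a primal scorer estimates how well a candidate persona explains the observed response, a dual scorer estimates how well the persona can be predicted from the dialogue, and the two scores jointly rank candidates. The function names, signatures, and the additive scoring rule are illustrative assumptions only.

```python
# Illustrative sketch (assumed interfaces, not the paper's implementation):
# augment a persona-grounded dialogue example by ranking candidate personas
# with two complementary ("primal-dual") scorers.

from typing import Callable, List, Tuple


def augment_personas(
    dialogue: List[str],                    # dialogue history ending with the gold response
    gold_personas: List[str],               # personas already annotated for the speaker
    candidate_personas: List[str],          # external persona pool to draw from
    score_response: Callable[[List[str], List[str]], float],  # primal: score of response given dialogue + personas
    score_persona: Callable[[List[str], str], float],         # dual: score of persona given dialogue
    top_k: int = 2,
) -> List[str]:
    """Rank candidate personas by a combined primal + dual score and
    return the top-k as augmented persona annotations (hypothetical scheme)."""
    scored: List[Tuple[float, str]] = []
    for persona in candidate_personas:
        primal = score_response(dialogue, gold_personas + [persona])  # does the persona help explain the response?
        dual = score_persona(dialogue, persona)                       # is the persona inferable from the dialogue?
        scored.append((primal + dual, persona))
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [persona for _, persona in scored[:top_k]]
```

In this sketch the two scorers would typically be pre-trained language models fine-tuned on the response-prediction and persona-prediction tasks, respectively; the augmented personas returned here would then be added to the dataset entry they were scored against.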

Published

2022-06-28

How to Cite

Kim, M., Kwak, B.-woo, Kim, Y., Lee, H.-in, Hwang, S.-won, & Yeo, J. (2022). Dual Task Framework for Improving Persona-Grounded Dialogue Dataset. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10), 10912-10920. https://doi.org/10.1609/aaai.v36i10.21338

Section

AAAI Technical Track on Speech and Natural Language Processing