Revisiting Iterative Back-Translation from the Perspective of Compositional Generalization

Authors

  • Yinuo Guo, Peking University
  • Hualei Zhu, Beihang University
  • Zeqi Lin, Microsoft Research Asia
  • Bei Chen, Microsoft Research Asia
  • Jian-Guang Lou, Microsoft Research Asia
  • Dongmei Zhang, Microsoft Research Asia

Keywords:

Semi-Supervised Learning

Abstract

Human intelligence exhibits compositional generalization (i.e., the capacity to understand and produce unseen combinations of seen components), but current neural seq2seq models lack such ability. In this paper, we revisit iterative back-translation, a simple yet effective semi-supervised method, to investigate whether and how it can improve compositional generalization. Specifically: (1) we first empirically show that iterative back-translation substantially improves performance on compositional generalization benchmarks (CFQ and SCAN); (2) to understand why it is useful, we carefully examine the performance gains and find that iterative back-translation increasingly corrects errors in pseudo-parallel data; (3) to further encourage this mechanism, we propose curriculum iterative back-translation, which improves the quality of pseudo-parallel data and thereby further boosts performance.
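The iterative loop described in the abstract can be illustrated with a toy sketch. This is not the authors' code: the two "models" below are stand-in lookup tables (a real system would use neural seq2seq models), and all names are hypothetical. The structure, however, mirrors the method: train forward (source-to-target) and backward (target-to-source) models on parallel data, use each to back-translate monolingual data into pseudo-parallel pairs, and retrain on the union so that later rounds see progressively cleaner pseudo pairs.

```python
# Toy sketch of iterative back-translation (illustrative only; not the
# paper's implementation). A "model" here is just a lookup table trained
# by memorising (input, output) pairs; unknown inputs fall back to identity.

def train(pairs):
    """Fit a lookup-table 'model' on (input, output) pairs."""
    return dict(pairs)

def translate(model, sentence):
    """'Translate' by lookup; unseen inputs are copied through unchanged."""
    return model.get(sentence, sentence)

def iterative_back_translation(parallel, mono_src, mono_tgt, rounds=3):
    """Alternate between generating pseudo-parallel data and retraining.

    parallel  : list of (source, target) gold pairs
    mono_src  : monolingual source-side sentences
    mono_tgt  : monolingual target-side sentences
    """
    fwd = train(parallel)                         # source -> target model
    bwd = train((t, s) for s, t in parallel)      # target -> source model
    for _ in range(rounds):
        # Back-translate monolingual target data into pseudo sources,
        # and monolingual source data into pseudo targets.
        pseudo_fwd = [(translate(bwd, t), t) for t in mono_tgt]
        pseudo_bwd = [(translate(fwd, s), s) for s in mono_src]
        # Retrain each model on gold + pseudo-parallel data. As the models
        # improve across rounds, errors in the pseudo pairs shrink.
        fwd = train(list(parallel) + pseudo_fwd)
        bwd = train([(t, s) for s, t in parallel] + pseudo_bwd)
    return fwd, bwd
```

The curriculum variant proposed in the paper would additionally order the monolingual data from easy to hard across rounds, so the early (weaker) models only generate pseudo pairs for inputs they are likely to get right.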

Published

2021-05-18

How to Cite

Guo, Y., Zhu, H., Lin, Z., Chen, B., Lou, J.-G., & Zhang, D. (2021). Revisiting Iterative Back-Translation from the Perspective of Compositional Generalization. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7601-7609. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16930

Section

AAAI Technical Track on Machine Learning II