Zero-Shot Text-to-SQL Learning with Auxiliary Task

Authors

  • Shuaichen Chang The Ohio State University
  • Pengfei Liu Fudan University
  • Yun Tang JD.COM AI Research
  • Jing Huang JD.COM AI Research
  • Xiaodong He JD.COM AI Research
  • Bowen Zhou JD.COM AI Research

DOI:

https://doi.org/10.1609/aaai.v34i05.6246

Abstract

Recent years have seen great success in the use of neural seq2seq models for the text-to-SQL task. However, little attention has been paid to how these models generalize to realistic unseen data, which naturally raises a question: does this impressive performance signify a perfectly generalizing model, or do limitations remain?

In this paper, we first diagnose the bottleneck of the text-to-SQL task by providing a new testbed, in which we observe that existing models generalize poorly to rarely seen data. This analysis motivates us to design a simple but effective auxiliary task, which serves both as a supportive model and as a regularization term on the generation task to improve the models' generalization. Experimentally, we evaluate our models on the large text-to-SQL dataset WikiSQL. Our models improve over a strong coarse-to-fine baseline by more than 3% absolute accuracy on the whole dataset. More interestingly, on a zero-shot subset of WikiSQL, our models achieve a 5% absolute accuracy gain over the baseline, clearly demonstrating their superior generalizability.
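As a rough illustration of the idea described above (a sketch, not the authors' released code), the snippet below combines a main SQL-generation loss with a weighted auxiliary loss, so that the auxiliary task regularizes the generator during joint training. The function name, tensor shapes, and the weight `aux_weight` are all hypothetical stand-ins.

```python
# Minimal sketch of a multi-task objective in which an auxiliary task
# regularizes the main text-to-SQL generation loss. All names here
# (joint_loss, aux_weight, the logits/targets layout) are hypothetical.
import torch

def joint_loss(gen_logits, gen_targets, aux_logits, aux_targets, aux_weight=0.5):
    """Return generation loss plus a weighted auxiliary-task loss.

    gen_logits: (batch, seq_len, vocab) scores from the SQL generator.
    aux_logits: (batch, seq_len, num_labels) scores from the auxiliary head.
    aux_weight: scalar balancing the auxiliary (regularization) term.
    """
    gen_loss = torch.nn.functional.cross_entropy(
        gen_logits.view(-1, gen_logits.size(-1)), gen_targets.view(-1)
    )
    aux_loss = torch.nn.functional.cross_entropy(
        aux_logits.view(-1, aux_logits.size(-1)), aux_targets.view(-1)
    )
    # The auxiliary term acts as a regularizer on the shared encoder,
    # discouraging the generator from overfitting to frequently seen patterns.
    return gen_loss + aux_weight * aux_loss
```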

Published

2020-04-03

How to Cite

Chang, S., Liu, P., Tang, Y., Huang, J., He, X., & Zhou, B. (2020). Zero-Shot Text-to-SQL Learning with Auxiliary Task. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 7488-7495. https://doi.org/10.1609/aaai.v34i05.6246

Section

AAAI Technical Track: Natural Language Processing