Graphix-T5: Mixing Pre-trained Transformers with Graph-Aware Layers for Text-to-SQL Parsing

Authors

  • Jinyang Li (The University of Hong Kong; DAMO Academy, Alibaba Group)
  • Binyuan Hui (DAMO Academy, Alibaba Group)
  • Reynold Cheng (The University of Hong Kong; Guangdong-Hong Kong-Macau Joint Laboratory)
  • Bowen Qin (Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences)
  • Chenhao Ma (The Chinese University of Hong Kong, Shenzhen)
  • Nan Huo (The University of Hong Kong)
  • Fei Huang (DAMO Academy, Alibaba Group)
  • Wenyu Du (The University of Hong Kong)
  • Luo Si (DAMO Academy, Alibaba Group)
  • Yongbin Li (DAMO Academy, Alibaba Group)

DOI:

https://doi.org/10.1609/aaai.v37i11.26536

Keywords:

SNLP: Lexical & Frame Semantics, Semantic Parsing

Abstract

The task of text-to-SQL parsing, which aims to convert natural language questions into executable SQL queries, has garnered increasing attention in recent years. One of the major challenges in text-to-SQL parsing is domain generalization, i.e., how to generalize well to unseen databases. Recently, the pre-trained text-to-text transformer model T5, though not specialized for text-to-SQL parsing, has achieved state-of-the-art performance on standard benchmarks targeting domain generalization. In this work, we explore ways to further augment the pre-trained T5 model with specialized components for text-to-SQL parsing. Such components are expected to introduce structural inductive bias into text-to-SQL parsers, thus improving the model's capacity for (potentially multi-hop) reasoning, which is critical for generating structure-rich SQL queries. To this end, we propose a new architecture, GRAPHIX-T5, a mixed model in which the standard pre-trained transformer is augmented with specially designed graph-aware layers. Extensive experiments and analysis demonstrate the effectiveness of GRAPHIX-T5 across four text-to-SQL benchmarks: SPIDER, SYN, REALISTIC, and DK. GRAPHIX-T5 surpasses all other T5-based parsers by a significant margin, achieving new state-of-the-art performance. Notably, GRAPHIX-T5-large outperforms the original T5-large by 5.7% on exact match (EM) accuracy and 6.6% on execution accuracy (EX), and even outperforms T5-3B by 1.2% on EM and 1.5% on EX.
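To make the "mixed model" idea concrete, below is a minimal PyTorch sketch of one layer that combines a standard transformer encoder block with a relation-aware message-passing step over the question/schema graph. This is an illustration under assumptions, not the authors' implementation: the names (RelationalGNN, GraphixLayer, num_relations), the RGCN-style aggregation, and the additive mixing of semantic and structural states are all hypothetical simplifications of the design described in the abstract.

```python
# Hypothetical sketch of a "graph-aware" layer mixed into a transformer stack
# (not the authors' code). Semantic hidden states from self-attention are
# combined with structural states from relation-aware message passing.
import torch
import torch.nn as nn


class RelationalGNN(nn.Module):
    """Simplified RGCN-style message passing over a heterogeneous graph."""

    def __init__(self, d_model: int, num_relations: int):
        super().__init__()
        # One projection per relation type (e.g. question token <-> column,
        # column <-> table via foreign key, ...).
        self.rel_proj = nn.ModuleList(
            nn.Linear(d_model, d_model, bias=False) for _ in range(num_relations)
        )
        self.norm = nn.LayerNorm(d_model)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h:   (num_nodes, d_model) node states (question tokens + schema items)
        # adj: (num_relations, num_nodes, num_nodes) 0/1 adjacency per relation
        msg = torch.zeros_like(h)
        for r, proj in enumerate(self.rel_proj):
            deg = adj[r].sum(-1, keepdim=True).clamp(min=1)  # mean aggregation
            msg = msg + adj[r] @ proj(h) / deg
        return self.norm(h + msg)  # residual connection


class GraphixLayer(nn.Module):
    """One mixed layer: a transformer block plus a graph block."""

    def __init__(self, d_model: int, num_heads: int, num_relations: int):
        super().__init__()
        # Stand-in for a pre-trained T5 encoder block.
        self.semantic_block = nn.TransformerEncoderLayer(
            d_model, num_heads, batch_first=True
        )
        self.structural_block = RelationalGNN(d_model, num_relations)

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        semantic = self.semantic_block(h.unsqueeze(0)).squeeze(0)  # contextual states
        structural = self.structural_block(h, adj)  # multi-hop graph states
        return semantic + structural  # mix semantics and structure


# Usage: 10 nodes (question tokens + schema items), 3 relation types.
h = torch.randn(10, 512)
adj = (torch.rand(3, 10, 10) > 0.8).float()
layer = GraphixLayer(d_model=512, num_heads=8, num_relations=3)
print(layer(h, adj).shape)  # torch.Size([10, 512])
```

Stacking such layers lets structural information propagate across multiple hops of the question/schema graph, which is the kind of inductive bias the abstract argues a plain pre-trained T5 lacks.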

Published

2023-06-26

How to Cite

Li, J., Hui, B., Cheng, R., Qin, B., Ma, C., Huo, N., Huang, F., Du, W., Si, L., & Li, Y. (2023). Graphix-T5: Mixing Pre-trained Transformers with Graph-Aware Layers for Text-to-SQL Parsing. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13076-13084. https://doi.org/10.1609/aaai.v37i11.26536

Issue

Vol. 37 No. 11 (2023)

Section

AAAI Technical Track on Speech & Natural Language Processing