Filling Memory Gaps: Enhancing Continual Semantic Parsing via SQL Syntax Variance-Guided LLMs Without Real Data Replay

Authors

  • Ruiheng Liu — Xi'an Research Institute of High-Tech; Harbin Institute of Technology
  • Jinyu Zhang — Harbin Institute of Technology
  • Yanqi Song — Harbin Institute of Technology
  • Yu Zhang — Harbin Institute of Technology
  • Bailong Yang — Xi'an Research Institute of High-Tech

DOI:

https://doi.org/10.1609/aaai.v39i23.34644

Abstract

Continual Semantic Parsing (CSP) aims to train parsers that convert natural language questions into SQL across a sequence of tasks with limited annotated examples, adapting to dynamically updated databases in real-world scenarios. Previous studies mitigate this challenge by replaying historical data or employing parameter-efficient tuning (PET), but they often violate data privacy or rely on idealized continual learning settings. To address these issues, we propose a new LLM-Enhanced Continual Semantic Parsing method, named LECSP, which alleviates forgetting while encouraging generalization, without requiring real data replay or idealized settings. Specifically, it first analyzes the commonalities and differences between tasks from the perspective of SQL syntax to guide LLMs in reconstructing key memories, and improves memory accuracy through calibration. It then uses a task-aware dual-teacher distillation framework to promote the accumulation and transfer of knowledge during sequential training. Experimental results on two CSP benchmarks show that our method significantly outperforms existing methods, even those utilizing data replay or idealized settings. Additionally, we achieve generalization performance beyond the upper bound, better adapting to unseen tasks.

Published

2025-04-11

How to Cite

Liu, R., Zhang, J., Song, Y., Zhang, Y., & Yang, B. (2025). Filling Memory Gaps: Enhancing Continual Semantic Parsing via SQL Syntax Variance-Guided LLMs Without Real Data Replay. Proceedings of the AAAI Conference on Artificial Intelligence, 39(23), 24641–24649. https://doi.org/10.1609/aaai.v39i23.34644

Section

AAAI Technical Track on Natural Language Processing II