HeGTa: Leveraging Heterogeneous Graph-enhanced Large Language Models for Few-shot Complex Table Understanding
DOI: https://doi.org/10.1609/aaai.v39i23.34606
Abstract
Table Understanding (TU) has achieved promising advancements, but it faces the challenges of the scarcity of manually labeled tables and the presence of complex table structures. To address these challenges, we propose HeGTa, a heterogeneous graph (HG)-enhanced large language model (LLM) designed for few-shot TU tasks. This framework aligns structural table semantics with the LLM's parametric knowledge through soft prompts and instruction tuning. It also addresses complex tables with a multi-task pre-training scheme, incorporating three novel multi-granularity self-supervised HG pre-text tasks. We empirically demonstrate the effectiveness of HeGTa, showing that it outperforms the SOTA for few-shot complex TU on several benchmarks.
Published
2025-04-11
How to Cite
Jin, R., Li, Y., Qi, G., Hu, N., Li, Y.-F., Chen, J., … Bi, S. (2025). HeGTa: Leveraging Heterogeneous Graph-enhanced Large Language Models for Few-shot Complex Table Understanding. Proceedings of the AAAI Conference on Artificial Intelligence, 39(23), 24294–24302. https://doi.org/10.1609/aaai.v39i23.34606
Section
AAAI Technical Track on Natural Language Processing II