FlexKBQA: A Flexible LLM-Powered Framework for Few-Shot Knowledge Base Question Answering

Authors

  • Zhenyu Li, Tsinghua University
  • Sunqi Fan, Tsinghua University
  • Yu Gu, The Ohio State University
  • Xiuxing Li, University of Chinese Academy of Sciences; Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, CAS
  • Zhichao Duan, Tsinghua University
  • Bowen Dong, Tsinghua University
  • Ning Liu, Shandong University
  • Jianyong Wang, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v38i17.29823

Keywords:

NLP: Question Answering, NLP: (Large) Language Models, NLP: Sentence-level Semantics, Textual Inference, etc.

Abstract

Knowledge base question answering (KBQA) is a critical yet challenging task due to the vast number of entities within knowledge bases and the diversity of natural language questions posed by users. Unfortunately, the performance of most KBQA models declines significantly in real-world scenarios where high-quality annotated data is scarce. To mitigate the burden of manual annotation, we introduce FlexKBQA, which utilizes Large Language Models (LLMs) as program translators to address the challenges inherent in the few-shot KBQA task. Specifically, FlexKBQA leverages automated algorithms to sample diverse programs, such as SPARQL queries, from the knowledge base, which are subsequently converted into natural language questions via LLMs. This synthetic dataset facilitates training a specialized, lightweight model for the KB. Additionally, to reduce the distribution shift between synthetic data and real user questions, FlexKBQA introduces an execution-guided self-training method that iteratively leverages unlabeled user questions. Furthermore, we explore harnessing the inherent reasoning capability of LLMs to enhance the entire framework. Consequently, FlexKBQA offers substantial flexibility in data annotation and deployment, and is domain-agnostic. Through extensive experiments on GrailQA, WebQSP, and KQA Pro, we observe that under few-shot and even the more challenging zero-shot scenarios, FlexKBQA achieves impressive results with only a few annotations, surpassing all previous baselines and even approaching the performance of fully supervised models, reaching a remarkable 93% of their performance. We posit that FlexKBQA represents a significant advancement towards better integration of large and lightweight models. Code is available at https://github.com/leezythu/FlexKBQA.
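
The abstract outlines a pipeline of sampling programs from the KB, verbalizing them with an LLM, and then filtering self-training data by execution. Below is a minimal sketch of that loop in Python, assuming a SPARQL endpoint reachable via the SPARQLWrapper library; the function names (`synthesize`, `self_train_round`), the `llm` and `parser` callables, and the prompt text are illustrative placeholders, not the authors' actual interfaces.

```python
# Illustrative sketch of a FlexKBQA-style loop; all names are hypothetical.
from typing import Callable, List, Tuple

from SPARQLWrapper import SPARQLWrapper, JSON  # pip install sparqlwrapper


def execute(endpoint: str, query: str) -> list:
    """Run a SPARQL query and return its bindings (empty list on failure)."""
    client = SPARQLWrapper(endpoint)
    client.setQuery(query)
    client.setReturnFormat(JSON)
    try:
        return client.query().convert()["results"]["bindings"]
    except Exception:
        return []


def synthesize(programs: List[str],
               llm: Callable[[str], str]) -> List[Tuple[str, str]]:
    """Verbalize KB-sampled programs into (question, program) training pairs."""
    pairs = []
    for program in programs:
        prompt = f"Rewrite this SPARQL query as a natural-language question:\n{program}"
        pairs.append((llm(prompt), program))
    return pairs


def self_train_round(questions: List[str],
                     parser: Callable[[str], str],
                     endpoint: str) -> List[Tuple[str, str]]:
    """Execution-guided filtering: keep only unlabeled questions whose
    predicted program executes to a non-empty answer, as pseudo-labels."""
    pseudo = []
    for question in questions:
        program = parser(question)
        if execute(endpoint, program):
            pseudo.append((question, program))
    return pseudo
```

In this reading, the execution check acts as a weak correctness signal that lets real user questions enter training without manual annotation, which is how the abstract's self-training step narrows the synthetic-to-real distribution gap.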

Published

2024-03-24

How to Cite

Li, Z., Fan, S., Gu, Y., Li, X., Duan, Z., Dong, B., Liu, N., & Wang, J. (2024). FlexKBQA: A Flexible LLM-Powered Framework for Few-Shot Knowledge Base Question Answering. Proceedings of the AAAI Conference on Artificial Intelligence, 38(17), 18608-18616. https://doi.org/10.1609/aaai.v38i17.29823

Issue

Vol. 38 No. 17 (2024)

Section

AAAI Technical Track on Natural Language Processing II