STAR: Boosting Low-Resource Information Extraction by Structure-to-Text Data Generation with Large Language Models

Authors

  • Mingyu Derek Ma, UCLA
  • Xiaoxuan Wang, UCLA
  • Po-Nien Kung, UCLA
  • P. Jeffrey Brantingham, UCLA
  • Nanyun Peng, UCLA
  • Wei Wang, UCLA

DOI:

https://doi.org/10.1609/aaai.v38i17.29839

Keywords:

NLP: (Large) Language Models, NLP: Information Extraction, NLP: Applications

Abstract

Information extraction (IE) tasks such as event extraction require an in-depth understanding of the output structure and sub-task dependencies, and they rely heavily on task-specific training data in the form of (passage, target structure) pairs to achieve reasonable performance. However, obtaining such data through human annotation is costly, creating a pressing need for low-resource IE approaches that require minimal human labeling in real-world applications. Fine-tuning supervised models on synthesized training data is a generalizable solution, but existing data generation methods either still rely on large-scale ground-truth data or cannot handle complicated IE tasks due to their poor performance. To address these challenges, we propose STAR, a data generation method that leverages Large Language Models (LLMs) to synthesize data instances from limited seed demonstrations, thereby boosting low-resource IE performance. Our approach first generates target structures (Y) and then generates passages (X), both with the aid of LLMs. We design fine-grained, step-by-step instructions to obtain initial data instances, and we further reduce errors and improve data quality through self-reflection error identification and self-refinement with iterative revision. Our experiments show that the data generated by STAR significantly improve performance on low-resource event extraction and relation extraction tasks, even surpassing the effectiveness of human-curated data. Human assessment shows that STAR-generated data exhibit higher passage quality and align better with the task definitions than human-curated data.
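The abstract outlines a four-part pipeline: structure generation, passage generation, self-reflection error identification, and self-refinement with iterative revision. Below is a minimal sketch of how such a structure-to-text loop could be wired up, assuming a generic `llm()` helper; every prompt, function name, and parameter here is an illustrative assumption, not the authors' implementation.

```python
# Hypothetical sketch of a STAR-style structure-to-text generation loop.
# The `llm()` helper and all prompts are placeholders, not the paper's code.

def llm(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    raise NotImplementedError

def generate_instance(task_definition: str, seed_demos: list[str],
                      max_revisions: int = 3) -> tuple[str, str]:
    # Step 1: synthesize a target structure Y (e.g., an event record)
    # from the task definition and a few seed demonstrations.
    y = llm(f"Task: {task_definition}\nExamples:\n" + "\n".join(seed_demos)
            + "\nGenerate a new target structure:")

    # Step 2: generate a passage X that expresses the structure Y.
    x = llm(f"Write a passage that fully expresses this structure:\n{y}")

    # Steps 3-4: self-reflection to identify errors, then iterative
    # self-refinement until passage and structure are consistent.
    for _ in range(max_revisions):
        critique = llm(f"Passage:\n{x}\nStructure:\n{y}\n"
                       "List any mismatches between them, or reply 'OK':")
        if critique.strip() == "OK":
            break
        x = llm(f"Revise the passage to fix these issues:\n{critique}\n"
                f"Passage:\n{x}\nStructure:\n{y}")

    # The result is one synthetic (passage, target structure) training pair.
    return x, y
```

Generating Y before X reflects the structure-to-text direction in the paper's title: conditioning the passage on a complete target structure makes the resulting (X, Y) pair well-aligned by construction, which is harder to guarantee when extracting structures from free-form generated text.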

Published

2024-03-24

How to Cite

Ma, M. D., Wang, X., Kung, P.-N., Brantingham, P. J., Peng, N., & Wang, W. (2024). STAR: Boosting Low-Resource Information Extraction by Structure-to-Text Data Generation with Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 38(17), 18751-18759. https://doi.org/10.1609/aaai.v38i17.29839

Issue

Vol. 38 No. 17 (2024)

Section

AAAI Technical Track on Natural Language Processing II