Language Models of Code Are Few-Shot Planners and Reasoners for Multi-Document Summarization with Attribution
DOI:
https://doi.org/10.1609/aaai.v39i23.34676

Abstract
Document summarization has greatly benefited from advances in large language models (LLMs). In real-world situations, summaries often need to be generated from multiple documents with diverse sources and authors, lacking a clear information flow. Naively concatenating these documents and generating a summary can lead to poorly structured narratives and redundancy. Additionally, attributing each part of the generated summary to a specific source is crucial for reliability. In this study, we address multi-document summarization with attribution using our proposed solution ***MiDAS-PRo***, consisting of three stages: (i) Planning the hierarchical organization of source documents, (ii) Reasoning by generating relevant entities/topics, and (iii) Summary Generation. We treat the first two sub-problems as a code completion task for LLMs. By incorporating well-selected in-context learning examples through a graph attention network, LLMs effectively generate plans and reason topics for a document collection. Experiments on summarizing scientific articles from public datasets show that our approach outperforms state-of-the-art baselines in both automated and human evaluations.
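The abstract's core idea, casting the planning stage as code completion for an LLM, can be illustrated with a minimal sketch. All function names and the plan representation below are assumptions for illustration, not the paper's actual implementation; in the paper, the in-context examples would be selected by a graph attention network rather than supplied by hand.

```python
# Hypothetical sketch of framing multi-document plan generation as a
# code completion task, in the spirit of MiDAS-PRo's first stage.
# The prompt renders documents and example plans as a partial Python
# program, and a code LLM would complete the final `plan =` line.

def build_planning_prompt(documents, examples):
    """Render source documents and in-context (documents, plan) example
    pairs as a partial Python program for a code LLM to complete."""
    lines = []
    # In-context examples: each pair maps a document list to a
    # hierarchical plan (here, a nested list of document indices).
    for ex_docs, ex_plan in examples:
        lines.append(f"documents = {ex_docs!r}")
        lines.append(f"plan = {ex_plan!r}  # hierarchical grouping of doc ids")
        lines.append("")
    # Target instance: left incomplete so the model fills in the plan.
    lines.append(f"documents = {documents!r}")
    lines.append("plan =")
    return "\n".join(lines)

prompt = build_planning_prompt(
    documents=["doc 1: graph neural network survey",
               "doc 2: attention mechanisms"],
    examples=[(["example doc A", "example doc B"], [[0], [1]])],
)
print(prompt)
```

The completed `plan` (e.g. a nested list grouping related documents) would then condition the reasoning and summary-generation stages, with document indices in the plan providing the attribution links.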
Published
2025-04-11
How to Cite
Nandy, A., & Bandyopadhyay, S. (2025). Language Models of Code Are Few-Shot Planners and Reasoners for Multi-Document Summarization with Attribution. Proceedings of the AAAI Conference on Artificial Intelligence, 39(23), 24930–24938. https://doi.org/10.1609/aaai.v39i23.34676
Section
AAAI Technical Track on Natural Language Processing II