Data Augmentation for Abstractive Query-Focused Multi-Document Summarization

Authors

  • Ramakanth Pasunuru, University of North Carolina at Chapel Hill
  • Asli Celikyilmaz, Microsoft Research
  • Michel Galley, Microsoft Research
  • Chenyan Xiong, Microsoft Research
  • Yizhe Zhang, Microsoft Research
  • Mohit Bansal, University of North Carolina at Chapel Hill
  • Jianfeng Gao, Microsoft Research

DOI:

https://doi.org/10.1609/aaai.v35i15.17611

Keywords:

Summarization, Generation

Abstract

The progress in Query-focused Multi-Document Summarization (QMDS) has been limited by the lack of sufficient large-scale, high-quality training datasets. We present two QMDS training datasets, which we construct using two data augmentation methods: (1) transferring the commonly used single-document CNN/Daily Mail summarization dataset to create the QMDSCNN dataset, and (2) mining search-query logs to create the QMDSIR dataset. These two datasets have complementary properties, i.e., QMDSCNN has real summaries but simulated queries, while QMDSIR has real queries but simulated summaries. To cover both the real-summary and real-query aspects, we build abstractive end-to-end neural network models on the combined datasets that yield new state-of-the-art transfer results on DUC datasets. We also introduce new hierarchical encoders that enable a more efficient encoding of the query together with multiple documents. Empirical results demonstrate that our data augmentation and encoding methods outperform baseline models on automatic metrics, as well as on human evaluations along multiple attributes.
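As an illustration only (not taken from the paper), the sketch below shows one way a query-aware hierarchical encoder for QMDS could be organized: tokens are encoded per document, a cross-document layer contextualizes document representations, and the query attends over them. All class names, dimensions, and layer choices here are hypothetical assumptions, assuming PyTorch is available.

```python
import torch
import torch.nn as nn

class QueryAwareHierarchicalEncoder(nn.Module):
    """Toy sketch (hypothetical): per-document token encoding, a
    cross-document layer, and query-to-document attention."""
    def __init__(self, vocab_size=10000, d_model=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.token_encoder = nn.GRU(d_model, d_model, batch_first=True)
        self.doc_encoder = nn.GRU(d_model, d_model, batch_first=True)
        self.query_attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)

    def forward(self, docs, query):
        # docs: list of (doc_len,) token-id tensors; query: (query_len,) token ids
        _, doc_states = zip(*[self.token_encoder(self.embed(d).unsqueeze(0)) for d in docs])
        doc_vecs = torch.cat([h[-1] for h in doc_states], dim=0).unsqueeze(0)  # (1, num_docs, d)
        doc_ctx, _ = self.doc_encoder(doc_vecs)                                # cross-document layer
        _, q_state = self.token_encoder(self.embed(query).unsqueeze(0))
        q_vec = q_state[-1].unsqueeze(1)                                       # (1, 1, d)
        fused, attn = self.query_attn(q_vec, doc_ctx, doc_ctx)                 # query attends over docs
        return fused.squeeze(0), attn.squeeze(0)

# Usage with random token ids (shapes are illustrative only)
enc = QueryAwareHierarchicalEncoder()
docs = [torch.randint(0, 10000, (50,)) for _ in range(3)]
query = torch.randint(0, 10000, (8,))
fused, attn = enc(docs, query)
print(fused.shape, attn.shape)  # torch.Size([1, 128]) torch.Size([1, 3])
```

The fused representation and per-document attention weights would then feed an abstractive decoder; the paper's actual architecture and training setup are described in the full text linked via the DOI above.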

Published

2021-05-18

How to Cite

Pasunuru, R., Celikyilmaz, A., Galley, M., Xiong, C., Zhang, Y., Bansal, M., & Gao, J. (2021). Data Augmentation for Abstractive Query-Focused Multi-Document Summarization. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15), 13666-13674. https://doi.org/10.1609/aaai.v35i15.17611

Section

AAAI Technical Track on Speech and Natural Language Processing II