A Multi-Task Learning Framework for Abstractive Text Summarization

Authors

  • Yao Lu, University of Waterloo
  • Linqing Liu, University of Waterloo
  • Zhile Jiang, Sichuan University
  • Min Yang, Chinese Academy of Sciences
  • Randy Goebel, University of Alberta

DOI:

https://doi.org/10.1609/aaai.v33i01.33019987

Abstract

We propose a multi-task learning framework for abstractive text summarization (MATS), motivated by the fact that humans have no difficulty performing such a task because they can draw on knowledge from multiple domains. Specifically, MATS consists of three components: (i) a text categorization model that learns rich category-specific text representations using a bi-LSTM encoder; (ii) a syntax labeling model that learns syntactic structure to improve the syntax-aware LSTM decoder; and (iii) an abstractive text summarization model that shares its encoder and decoder with the text categorization and syntax labeling tasks, respectively. In particular, the abstractive text summarization model benefits significantly from the additional text categorization and syntax knowledge. Our experimental results show that MATS outperforms competing methods.
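To make the shared-component architecture concrete, here is a minimal sketch in PyTorch. It is illustrative only, not the authors' implementation: the class and head names (MATSSketch, category_head, syntax_head, summary_head) are hypothetical, the layer sizes are arbitrary, and a mean-pooled source context stands in for the attention mechanism a full summarization model would use.

```python
# Minimal sketch of a MATS-style multi-task model (assumed PyTorch;
# all names and sizes are illustrative, not the paper's exact setup).
import torch
import torch.nn as nn

class MATSSketch(nn.Module):
    def __init__(self, vocab_size, n_categories, n_syntax_tags,
                 emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Bi-LSTM encoder shared by summarization and text categorization.
        self.encoder = nn.LSTM(emb_dim, hid_dim, bidirectional=True,
                               batch_first=True)
        self.category_head = nn.Linear(2 * hid_dim, n_categories)
        # LSTM decoder shared by summarization and syntax labeling.
        self.decoder = nn.LSTM(emb_dim + 2 * hid_dim, hid_dim,
                               batch_first=True)
        self.syntax_head = nn.Linear(hid_dim, n_syntax_tags)
        self.summary_head = nn.Linear(hid_dim, vocab_size)

    def forward(self, src, tgt):
        enc_out, _ = self.encoder(self.embed(src))       # (B, S, 2H)
        # Categorization: mean-pool encoder states, then classify.
        category_logits = self.category_head(enc_out.mean(dim=1))
        # Decoder input: target embeddings plus pooled source context
        # (a simplified stand-in for attention).
        context = enc_out.mean(dim=1, keepdim=True).expand(-1, tgt.size(1), -1)
        dec_in = torch.cat([self.embed(tgt), context], dim=-1)
        dec_out, _ = self.decoder(dec_in)                # (B, T, H)
        return (category_logits,                         # document category
                self.syntax_head(dec_out),               # per-token syntax labels
                self.summary_head(dec_out))              # next-token summary logits
```

Under this reading of the abstract, training would minimize a weighted sum of three cross-entropy losses (categorization, syntax labeling, summary generation), so the shared encoder and decoder receive gradient signal from the auxiliary tasks as well as from summarization itself.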

Published

2019-07-17

How to Cite

Lu, Y., Liu, L., Jiang, Z., Yang, M., & Goebel, R. (2019). A Multi-Task Learning Framework for Abstractive Text Summarization. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 9987-9988. https://doi.org/10.1609/aaai.v33i01.33019987

Issue

Vol. 33 No. 01 (2019)

Section

Student Abstract Track