Exploring Human-Like Reading Strategy for Abstractive Text Summarization


  • Min Yang Chinese Academy of Sciences
  • Qiang Qu Chinese Academy of Sciences
  • Wenting Tu Shanghai University of Finance and Economics
  • Ying Shen Peking University
  • Zhou Zhao Zhejiang University
  • Xiaojun Chen Shenzhen University




Recent artificial intelligence studies have witnessed great interest in abstractive text summarization. Although deep neural network based methods have made remarkable progress, generating plausible, high-quality abstractive summaries remains a challenging task. The human-like reading strategy is rarely explored in abstractive text summarization, yet it can improve summarization effectiveness by modeling the process of reading comprehension and logical thinking. Motivated by the human-like reading strategy, which follows a hierarchical routine, we propose a novel Hybrid learning model for Abstractive Text Summarization (HATS). The model consists of three major components — a knowledge-based attention network, a multi-task encoder-decoder network, and a generative adversarial network — which correspond to the different stages of the human-like reading strategy. To verify the effectiveness of HATS, we conduct extensive experiments on two real-life datasets, CNN/Daily Mail and Gigaword. The experimental results demonstrate that HATS achieves impressive results on both datasets.




How to Cite

Yang, M., Qu, Q., Tu, W., Shen, Y., Zhao, Z., & Chen, X. (2019). Exploring Human-Like Reading Strategy for Abstractive Text Summarization. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 7362-7369. https://doi.org/10.1609/aaai.v33i01.33017362



AAAI Technical Track: Natural Language Processing