Exploring Explainable Selection to Control Abstractive Summarization

Authors

  • Haonan Wang, School of Computer Science and Technology, Beijing Institute of Technology
  • Yang Gao, School of Computer Science and Technology, Beijing Institute of Technology
  • Yu Bai, School of Computer Science and Technology, Beijing Institute of Technology
  • Mirella Lapata, Institute for Language, Cognition and Computation, School of Informatics, University of Edinburgh
  • Heyan Huang, Beijing Engineering Research Center of High Volume Language Information Processing and Cloud Computing Applications; School of Computer Science and Technology, Beijing Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v35i15.17641

Keywords:

Summarization, Generation, Interpretability & Analysis of NLP Models

Abstract

Like humans, document summarization models can interpret a document’s contents in a number of ways. Unfortunately, today’s neural models are largely black boxes that provide little explanation of how or why they generated a summary the way they did. Therefore, to begin prying open the black box and to inject a level of control into the substance of the final summary, we developed ESCA, a novel select-and-generate framework that focuses on explainability. By revealing the latent centrality of sentences and the interactions between them, along with scores for novelty and relevance, users are given a window into the choices the model is making and an opportunity to guide those choices in a more desirable direction. A novel pair-wise matrix captures the sentence interactions, centrality, and attribute scores, and a mask with tunable attribute thresholds allows the user to control which sentences are likely to be included in the extraction. A sentence-deployed attention mechanism in the abstractor ensures the final summary emphasizes the desired content. Additionally, the encoder is adaptable, supporting both Transformer- and BERT-based configurations. In a series of experiments assessed with ROUGE metrics and two human evaluations, ESCA outperformed eight state-of-the-art models on the CNN/DailyMail and NYT50 benchmark datasets.
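
To make the thresholded selection concrete, here is a minimal sketch in PyTorch. All names, shapes, and threshold values are our own assumptions for illustration, not the paper's actual implementation: centrality is read off a pairwise sentence-interaction matrix, a tunable mask drops sentences below the novelty or relevance thresholds, and the surviving scores are normalized into an extraction distribution.

    import torch

    def select_sentences(pairwise, novelty, relevance,
                         novelty_thresh=0.5, relevance_thresh=0.5):
        # Centrality: aggregate each sentence's interactions with all others.
        centrality = pairwise.sum(dim=-1)
        # Tunable mask: keep only sentences clearing both attribute thresholds.
        keep = (novelty >= novelty_thresh) & (relevance >= relevance_thresh)
        # Masked-out sentences get -inf, so softmax gives them ~zero probability.
        scores = centrality.masked_fill(~keep, float("-inf"))
        return torch.softmax(scores, dim=-1)

    # Toy usage: 3 sentences; sentence 1 fails the novelty threshold.
    pairwise = torch.tensor([[0.0, 0.8, 0.2],
                             [0.8, 0.0, 0.5],
                             [0.2, 0.5, 0.0]])
    novelty = torch.tensor([0.9, 0.3, 0.7])
    relevance = torch.tensor([0.8, 0.9, 0.6])
    print(select_sentences(pairwise, novelty, relevance))
    # Probability mass falls only on sentences 0 and 2.

Raising either threshold shrinks the candidate set, which is the control knob such a framework exposes to users.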

Published

2021-05-18

How to Cite

Wang, H., Gao, Y., Bai, Y., Lapata, M., & Huang, H. (2021). Exploring Explainable Selection to Control Abstractive Summarization. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15), 13933-13941. https://doi.org/10.1609/aaai.v35i15.17641

Issue

Vol. 35 No. 15 (2021)

Section

AAAI Technical Track on Speech and Natural Language Processing II