A Semantic QA-Based Approach for Text Summarization Evaluation

Authors

  • Ping Chen, University of Massachusetts Boston
  • Fei Wu, University of Massachusetts Boston
  • Tong Wang, University of Massachusetts Boston
  • Wei Ding, University of Massachusetts Boston

DOI:

https://doi.org/10.1609/aaai.v32i1.11911

Keywords:

text summarization evaluation, question answering

Abstract

Many Natural Language Processing and Computational Linguistics applications involve generating new texts from existing ones, such as summarization, text simplification, and machine translation. However, a serious problem has haunted these applications for decades: how to automatically and accurately assess the quality of the generated texts. In this paper, we present preliminary results on one especially useful and challenging problem in NLP system evaluation: how to pinpoint the content differences between two text passages (especially large passages such as articles and books). Our idea is intuitive and very different from existing approaches. We treat one text passage as a small knowledge base and ask it a large number of questions to exhaustively identify all of its content points. By comparing the correctly answered questions from the two text passages, we can compare their content precisely. An experiment using the 2007 DUC summarization corpus clearly shows promising results.
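The comparison step described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it assumes an upstream QA system has already determined, for a shared question set, which questions each passage answers correctly, and it compares content via those sets of correctly answered question IDs.

```python
def content_overlap(answered_ref, answered_sum):
    """Compare two passages by the question IDs each answered correctly.

    answered_ref / answered_sum: sets of question IDs correctly answered
    by the reference passage and the candidate summary, respectively
    (hypothetical output of an upstream QA system).
    Returns (recall, precision): how much of the reference's content the
    summary covers, and how much of the summary's content appears in the
    reference.
    """
    if not answered_ref or not answered_sum:
        return 0.0, 0.0
    shared = answered_ref & answered_sum  # questions both passages answer
    recall = len(shared) / len(answered_ref)
    precision = len(shared) / len(answered_sum)
    return recall, precision

# Example: the reference answers q1-q4; the summary answers q1, q2, and q5.
ref = {"q1", "q2", "q3", "q4"}
summ = {"q1", "q2", "q5"}
recall, precision = content_overlap(ref, summ)
print(recall, precision)
```

In this toy case the summary covers half of the reference's content points (recall 0.5), while two of its three answered questions are also answered by the reference. Precision and recall could be combined into an F-score for a single quality number.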

Published

2018-04-26

How to Cite

Chen, P., Wu, F., Wang, T., & Ding, W. (2018). A Semantic QA-Based Approach for Text Summarization Evaluation. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11911

Section

Main Track: NLP and Knowledge Representation