PEAK: Pyramid Evaluation via Automated Knowledge Extraction

Authors

  • Qian Yang, Tsinghua University
  • Rebecca Passonneau, Columbia University
  • Gerard de Melo, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v30i1.10336

Keywords:

automatic summarization, summarization evaluation, education

Abstract

Evaluating the selection of content in a summary is important both for human-written summaries, which can serve as a useful pedagogical tool for teaching reading and writing skills, and for machine-generated summaries, which are increasingly being deployed in information management. The pyramid method assesses a summary by aggregating content units from the summaries of a wise crowd (a form of crowdsourcing). It has proven highly reliable but has largely depended on manual annotation. We propose PEAK, the first method to automatically assess summary content using the pyramid method that also automatically generates the pyramid content models. PEAK relies on open information extraction and graph algorithms. The resulting scores correlate well with manually derived pyramid scores on both human and machine summaries, opening up the possibility of widespread use in numerous applications.
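For context on the scoring the abstract describes, the following is a minimal sketch, in Python, of pyramid-style scoring. It assumes content units have already been extracted from the wise-crowd (model) summaries and matched against the candidate; PEAK performs that step with open information extraction and graph algorithms, which are not reproduced here. The function names and the set-based matching are illustrative assumptions, not the paper's implementation.

    from collections import Counter
    from itertools import chain

    def pyramid_weights(model_summaries):
        """Weight each content unit by the number of wise-crowd (model)
        summaries it occurs in; these counts form the pyramid tiers."""
        return Counter(chain.from_iterable(set(s) for s in model_summaries))

    def pyramid_score(candidate_units, weights):
        """Sum the weights of the candidate's matched content units and
        normalize by the best score attainable with that many units."""
        units = set(candidate_units)
        observed = sum(weights[u] for u in units if u in weights)
        top_tiers = sorted(weights.values(), reverse=True)[:len(units)]
        max_attainable = sum(top_tiers)
        return observed / max_attainable if max_attainable else 0.0

    # Toy example: three model summaries expressed as content-unit labels.
    models = [
        {"dogs_bark", "cats_meow"},
        {"dogs_bark", "birds_sing"},
        {"dogs_bark", "cats_meow", "birds_sing"},
    ]
    weights = pyramid_weights(models)  # dogs_bark: 3, cats_meow: 2, birds_sing: 2
    print(pyramid_score({"dogs_bark", "birds_sing"}, weights))  # 1.0

Normalizing by the top-k tier weights follows the standard pyramid score definition: a summary expressing n content units is compared against the best possible selection of n units from the pyramid.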

Published

2016-03-05

How to Cite

Yang, Q., Passonneau, R., & de Melo, G. (2016). PEAK: Pyramid Evaluation via Automated Knowledge Extraction. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10336

Section

Technical Papers: NLP and Knowledge Representation