Perception Score: A Learned Metric for Open-ended Text Generation Evaluation

Authors

  • Jing Gu University of California, Davis
  • Qingyang Wu University of California, Davis
  • Zhou Yu Columbia University

DOI:

https://doi.org/10.1609/aaai.v35i14.17526

Keywords:

Generation, General, Applications

Abstract

Automatic evaluation of open-ended natural language generation remains a challenge. We propose a learned evaluation metric, Perception Score, which utilizes a pre-trained model and incorporates context information for conditional generation. Perception Score assigns a holistic score along with an uncertainty measurement. We conduct experiments on three open-ended conditional generation tasks and two open-ended unconditional generation tasks. Perception Score consistently achieves state-of-the-art correlation with human evaluation scores across all tasks.
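The abstract describes a learned metric that scores a candidate jointly with its context and reports an uncertainty estimate. A minimal sketch of that idea, with loudly hypothetical pieces: `embed` is a toy hash-based stand-in for the paper's pre-trained encoder, the weights `w` stand in for a learned scoring head, and uncertainty is estimated here as the spread over dropout-style samples (an assumption; the paper's actual uncertainty computation may differ).

```python
import math
import random

random.seed(0)

def embed(text, dim=16):
    # Toy deterministic feature vector; a stand-in for a real
    # pre-trained encoder (hypothetical, not the paper's model).
    vec = [0.0] * dim
    for i, ch in enumerate(text.encode()):
        vec[i % dim] += ch / 255.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def perception_score_sketch(context, candidate, w, n_samples=32, drop_p=0.1):
    """Return (score, uncertainty) for a candidate given its context.

    Conditional scoring: context and candidate are embedded jointly.
    Uncertainty = std of the score over random dropout masks (assumed).
    """
    feats = embed(context + " " + candidate)
    samples = []
    for _ in range(n_samples):
        # Inverted dropout on the features, then a sigmoid score in [0, 1].
        dropped = [f / (1 - drop_p) if random.random() > drop_p else 0.0
                   for f in feats]
        logit = sum(wi * fi for wi, fi in zip(w, dropped))
        samples.append(1 / (1 + math.exp(-logit)))
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / len(samples)
    return mean, math.sqrt(var)

# Random weights stand in for a head trained against human judgments.
w = [random.gauss(0, 1) for _ in range(16)]
score, unc = perception_score_sketch("A prompt", "A generated reply", w)
print(score, unc)
```

In the actual metric the scorer is trained so that its output correlates with human evaluation; the uncertainty lets downstream users flag low-confidence scores.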

Published

2021-05-18

How to Cite

Gu, J., Wu, Q., & Yu, Z. (2021). Perception Score: A Learned Metric for Open-ended Text Generation Evaluation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(14), 12902-12910. https://doi.org/10.1609/aaai.v35i14.17526

Section

AAAI Technical Track on Speech and Natural Language Processing I