Better than Random: Reliable NLG Human Evaluation with Constrained Active Sampling

Authors

  • Jie Ruan, Peking University
  • Xiao Pu, Peking University
  • Mingqi Gao, Peking University
  • Xiaojun Wan, Peking University
  • Yuesheng Zhu, Peking University

DOI:

https://doi.org/10.1609/aaai.v38i17.29857

Keywords:

NLP: Generation, NLP: Interpretability, Analysis, and Evaluation of NLP Models

Abstract

Human evaluation is regarded as the most reliable evaluation method for NLG, but it is expensive and time-consuming. To save labor and cost, researchers in practice usually perform human evaluation on a small subset sampled from the whole dataset. However, different sampled subsets lead to different inter-system rankings. To derive a more correct inter-system ranking and make gold-standard human evaluation more reliable, we propose a Constrained Active Sampling Framework (CASF) for reliable human judgment. CASF operates through a Learner, a Systematic Sampler, and a Constrained Controller to select representative samples that yield a more correct inter-system ranking. Experimental results on 137 real NLG evaluation setups with 44 human evaluation metrics across 16 datasets and 5 NLG tasks demonstrate that CASF achieves 93.18% top-ranked system recognition accuracy and ranks first or second on 90.91% of the human metrics, with an overall inter-system ranking Kendall correlation of 0.83. Code and data are publicly available online.
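The abstract only names CASF's three components, so the following is a minimal illustrative sketch, in Python, of one plausible reading of the pipeline: a Learner scores every candidate output, a Systematic Sampler draws outputs at a fixed interval across the score-sorted pool, and a Constrained Controller filters the picks. All function names, the scoring interface, and the duplicate-only constraint are assumptions for illustration, not the paper's actual implementation.

    import random

    def constrained_active_sample(outputs, learner_score, k, seed=0):
        """Select k representative outputs for human evaluation.

        outputs       -- list of system outputs (e.g., generated strings)
        learner_score -- callable mapping an output to a predicted quality
                         score (a hypothetical stand-in for the Learner)
        k             -- number of samples the human annotators will judge
        """
        random.seed(seed)
        # Learner: score and sort every candidate output by predicted quality.
        scored = sorted(outputs, key=learner_score)
        # Systematic Sampler: take samples at a fixed interval over the
        # score-sorted list so all quality levels are represented.
        step = len(scored) / k
        start = random.uniform(0, step)  # random phase, classic systematic sampling
        picks = [scored[int(start + i * step)] for i in range(k)]
        # Constrained Controller (simplified here to one constraint):
        # forbid duplicate selections.
        seen, result = set(), []
        for p in picks:
            if p not in seen:
                seen.add(p)
                result.append(p)
        return result

A uniform random draw can, by chance, over-represent one quality band; the systematic pass over the score-sorted pool guards against that, which is the intuition behind sampling-based ranking reliability.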

Published

2024-03-24

How to Cite

Ruan, J., Pu, X., Gao, M., Wan, X., & Zhu, Y. (2024). Better than Random: Reliable NLG Human Evaluation with Constrained Active Sampling. Proceedings of the AAAI Conference on Artificial Intelligence, 38(17), 18915-18923. https://doi.org/10.1609/aaai.v38i17.29857

Section

AAAI Technical Track on Natural Language Processing II