Importance-Aware Learning for Neural Headline Editing

Authors

  • Qingyang Wu (University of California, Davis)
  • Lei Li (ByteDance)
  • Hao Zhou (ByteDance)
  • Ying Zeng (ByteDance)
  • Zhou Yu (University of California, Davis)

DOI:

https://doi.org/10.1609/aaai.v34i05.6467

Abstract

Many social media news writers are not professionally trained. Therefore, social media platforms have to hire professional editors to adjust amateur headlines to attract more readers. We propose to automate this headline editing process with neural network models to provide more immediate writing support for these social media news writers. To train such a neural headline editing model, we collected a dataset that contains articles with original headlines and professionally edited headlines. However, it is expensive to collect a large number of professionally edited headlines. To solve this low-resource problem, we design an encoder-decoder model that leverages large-scale pre-trained language models. We further improve the pre-trained model's quality by introducing headline generation as an intermediate task before the headline editing task. We also propose a Self Importance-Aware (SIA) loss to address the different levels of editing in the dataset by down-weighting the importance of easily classified tokens and sentences. With the help of pre-training, adaptation, and SIA, the model learns to generate headlines in the professional editor's style. Experimental results show that our method significantly improves the quality of headline editing compared with previous methods.
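The down-weighting idea behind the SIA loss can be illustrated with a small sketch. This is not the paper's actual formulation; the function name `sia_token_loss` and the focusing parameter `gamma` are illustrative assumptions, following the common focal-loss-style scheme of scaling each token's negative log-likelihood by a factor that shrinks as the model's confidence grows:

```python
import math

def sia_token_loss(probs, gamma=2.0):
    """Importance-aware per-token loss (illustrative sketch only).

    Each token's negative log-likelihood -log(p) is scaled by
    (1 - p) ** gamma, so tokens the model already predicts
    confidently (easy tokens) contribute little to the loss,
    while uncertain (hard) tokens keep most of their weight.
    `gamma` is a hypothetical focusing parameter, not a value
    taken from the paper.
    """
    return [((1.0 - p) ** gamma) * (-math.log(p)) for p in probs]

# An easy token (p = 0.99) is damped far more than a hard one (p = 0.2).
easy_loss, hard_loss = sia_token_loss([0.99, 0.2])
```

Under this scheme the easy token's contribution is several orders of magnitude smaller than the hard token's, which matches the abstract's stated goal of letting heavily edited (harder) tokens and sentences dominate training.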

Published

2020-04-03

How to Cite

Wu, Q., Li, L., Zhou, H., Zeng, Y., & Yu, Z. (2020). Importance-Aware Learning for Neural Headline Editing. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 9282-9289. https://doi.org/10.1609/aaai.v34i05.6467

Section

AAAI Technical Track: Natural Language Processing