A Dataset for Low-Resource Stylized Sequence-to-Sequence Generation
Low-resource stylized sequence-to-sequence (S2S) generation is in high demand. However, its development is hindered by existing datasets, which are limited in scale and lack automatic evaluation methods. We construct two large-scale, multiple-reference datasets for low-resource stylized S2S: the Machine Translation Formality Corpus (MTFC), which is easy to evaluate, and the Twitter Conversation Formality Corpus (TCFC), which tackles an important problem in chatbots. These datasets contain context-to-source-style parallel data, source-style-to-target parallel data, and non-parallel sentences in the target style to enable semi-supervised learning. We provide three baselines: the pivot-based method, the teacher-student method, and the back-translation method. We find that the pivot-based method performs worst, while the other two methods achieve the best scores on different metrics.