TY - JOUR
AU - Liu, Tianyu
AU - Luo, Fuli
AU - Xia, Qiaolin
AU - Ma, Shuming
AU - Chang, Baobao
AU - Sui, Zhifang
PY - 2019/07/17
Y2 - 2024/03/28
TI - Hierarchical Encoder with Auxiliary Supervision for Neural Table-to-Text Generation: Learning Better Representation for Tables
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 33
IS - 01
SE - AAAI Technical Track: Natural Language Processing
DO - 10.1609/aaai.v33i01.33016786
UR - https://ojs.aaai.org/index.php/AAAI/article/view/4653
SP - 6786-6793
AB - Generating natural language descriptions for structured tables that consist of multiple attribute-value tuples is a convenient way to help people understand the tables. Most neural table-to-text models are based on the encoder-decoder framework. However, it is hard for a vanilla encoder to learn an accurate semantic representation of a complex table. The challenges are two-fold: first, table-to-text datasets often contain a large number of attributes across different domains, so it is hard for the encoder to incorporate these heterogeneous resources. Second, a single encoder also has difficulty modeling the complex attribute-value structure of the tables. To this end, we first propose a two-level hierarchical encoder with coarse-to-fine attention to handle the attribute-value structure of the tables. Furthermore, to capture accurate semantic representations of the tables, we propose three joint tasks apart from the primary encoder-decoder learning, namely an auxiliary sequence labeling task, a text autoencoder, and multi-label classification, as auxiliary supervision for the table encoder. We test our models on the widely used WIKIBIO dataset, which contains Wikipedia infoboxes and related descriptions. The dataset contains complex tables as well as a large number of attributes across different domains. We achieve state-of-the-art performance on both automatic and human evaluation metrics.
ER -