Empower Sequence Labeling with Task-Aware Neural Language Model


  • Liyuan Liu University of Illinois at Urbana Champaign
  • Jingbo Shang University of Illinois at Urbana Champaign
  • Xiang Ren University of Southern California
  • Frank Xu Shanghai Jiao Tong University
  • Huan Gui Facebook
  • Jian Peng University of Illinois at Urbana Champaign
  • Jiawei Han University of Illinois at Urbana Champaign




Sequence Labeling, Language Model, Highway Networks


Linguistic sequence labeling is a general approach encompassing a variety of problems, such as part-of-speech tagging and named entity recognition. Recent advances in neural networks (NNs) make it possible to build reliable models without handcrafted features. However, in many cases, it is hard to obtain sufficient annotations to train these models. In this study, we develop a neural framework to extract knowledge from raw texts and empower the sequence labeling task. Besides word-level knowledge contained in pre-trained word embeddings, character-aware neural language models are incorporated to extract character-level knowledge. Transfer learning techniques are further adopted to mediate different components and guide the language model towards the key knowledge. Compared to previous methods, this task-specific knowledge allows us to adopt a more concise model and conduct more efficient training. Different from most transfer learning methods, the proposed framework does not rely on any additional supervision. It extracts knowledge from the self-contained order information of training sequences. Extensive experiments on benchmark datasets demonstrate the effectiveness of leveraging character-level knowledge and the efficiency of co-training. For example, on the CoNLL03 NER task, model training completes in about 6 hours on a single GPU, reaching an F_1 score of 91.71+/-0.10 without using any extra annotations.
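The keywords list highway networks as the component that mediates between the language model and the sequence labeler. As an illustration only (the function and parameter names below are ours, not from the paper), a single highway layer mixes a nonlinear transform of the input with an untouched carry path, weighted by a learned gate; a minimal plain-Python sketch:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def relu(z):
    return max(0.0, z)

def highway_layer(x, W_h, b_h, W_t, b_t):
    """One highway layer: y = t * H(x) + (1 - t) * x.

    x            : input vector (list of floats)
    W_h, b_h     : weights/bias of the transform H (ReLU affine map)
    W_t, b_t     : weights/bias of the transform gate t (sigmoid)
    When the gate t is near 0, the layer passes x through unchanged;
    when t is near 1, it outputs the full nonlinear transform H(x).
    """
    h = [relu(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W_h, b_h)]
    t = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W_t, b_t)]
    return [ti * hi + (1.0 - ti) * xi for ti, hi, xi in zip(t, h, x)]
```

With a strongly negative gate bias the layer acts as an identity mapping, which is what lets such layers be stacked without degrading the signal from either the language-model or the tagging branch.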




How to Cite

Liu, L., Shang, J., Ren, X., Xu, F., Gui, H., Peng, J., & Han, J. (2018). Empower Sequence Labeling with Task-Aware Neural Language Model. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12006