Neural Models for Sequence Chunking

Authors

  • Feifei Zhai IBM Watson
  • Saloni Potdar IBM Watson
  • Bing Xiang IBM Watson
  • Bowen Zhou IBM Watson

DOI:

https://doi.org/10.1609/aaai.v31i1.10995

Abstract

Many natural language understanding (NLU) tasks, such as shallow parsing (i.e., text chunking) and semantic slot filling, require the assignment of representative labels to the meaningful chunks in a sentence. Most current deep neural network (DNN) based methods treat these tasks as a sequence labeling problem, in which a word, rather than a chunk, is the basic unit for labeling. Chunks are then inferred from the standard IOB (Inside-Outside-Beginning) labels. In this paper, we investigate an alternative approach that applies DNNs to sequence chunking directly, proposing three neural models in which each chunk is treated as a complete unit for labeling. Experimental results show that the proposed neural sequence chunking models achieve state-of-the-art performance on both the text chunking and slot filling tasks.
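The word-level IOB scheme the abstract contrasts against can be illustrated with a small decoding routine. The sketch below (illustrative only, not code from the paper; the function name and span convention are our own) shows how chunks are recovered from per-word IOB labels such as "B-NP", "I-NP", and "O":

```python
def iob_to_chunks(tags):
    """Decode a sequence of IOB labels into (label, start, end) spans,
    with end exclusive. E.g. ["B-NP", "I-NP", "O"] -> [("NP", 0, 2)]."""
    chunks = []
    start, label = None, None
    for i, tag in enumerate(tags):
        # Close the open chunk on "O", on a new "B-", or on an "I-"
        # whose label does not match the chunk in progress.
        if tag == "O" or tag.startswith("B-") or (
            tag.startswith("I-") and tag[2:] != label
        ):
            if start is not None:
                chunks.append((label, start, i))
                start, label = None, None
        # Open a new chunk on "B-" (or, leniently, on a stray "I-").
        if tag.startswith("B-") or (tag.startswith("I-") and start is None):
            start, label = i, tag[2:]
    if start is not None:
        chunks.append((label, start, len(tags)))
    return chunks


# Example: "But it could be worse" with NP/VP/ADJP chunks
print(iob_to_chunks(["O", "B-NP", "B-VP", "I-VP", "B-ADJP"]))
# -> [('NP', 1, 2), ('VP', 2, 4), ('ADJP', 4, 5)]
```

Because chunk boundaries only emerge after this decoding pass, a word-level labeler never scores a chunk as a whole; the paper's models instead treat each chunk as a single labeling unit.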

Published

2017-02-12

How to Cite

Zhai, F., Potdar, S., Xiang, B., & Zhou, B. (2017). Neural Models for Sequence Chunking. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10995