Deep Semantic Role Labeling With Self-Attention

Authors

  • Zhixing Tan, Xiamen University
  • Mingxuan Wang, Tencent Technology
  • Jun Xie, Tencent Technology
  • Yidong Chen, Xiamen University
  • Xiaodong Shi, Xiamen University

DOI:

https://doi.org/10.1609/aaai.v32i1.11928

Abstract

Semantic Role Labeling (SRL) is believed to be a crucial step towards natural language understanding and has been widely studied. In recent years, end-to-end SRL with recurrent neural networks (RNNs) has gained increasing attention. However, it remains a major challenge for RNNs to handle structural information and long-range dependencies. In this paper, we present a simple and effective architecture for SRL that aims to address these problems. Our model is based on self-attention, which can directly capture the relationship between two tokens regardless of their distance. Our single model achieves F1=83.4 on the CoNLL-2005 shared task dataset and F1=82.7 on the CoNLL-2012 shared task dataset, outperforming the previous state-of-the-art results by 1.8 and 1.0 F1 points, respectively. Moreover, our model is computationally efficient, with a parsing speed of 50K tokens per second on a single Titan X GPU.
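To illustrate the mechanism the abstract refers to, below is a minimal sketch of generic scaled dot-product self-attention in plain NumPy. It is not the paper's exact architecture (which stacks self-attention layers with additional components for SRL); all names, dimensions, and the toy input are illustrative assumptions. The point it demonstrates is that every output position attends to every input position in a single step, so the path between any two tokens is constant regardless of their distance.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    """Generic scaled dot-product self-attention (illustrative sketch).

    X:             (seq_len, d_model) token representations
    W_q, W_k, W_v: (d_model, d_k) projection matrices
    Returns (seq_len, d_k) contextualized representations; each output
    token is a weighted mixture over all input tokens.
    """
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise token affinities
    weights = softmax(scores, axis=-1)        # one attention distribution per token
    return weights @ V

# Toy usage with hypothetical sizes: 6 tokens, 16-dim embeddings, 8-dim projections
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 16))
W_q, W_k, W_v = (rng.normal(size=(16, 8)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)  # (6, 8)
```

Because the attention weights are computed as a single matrix product over the whole sequence, this operation parallelizes well on GPUs, which is consistent with the high parsing speed reported in the abstract.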

Published

2018-04-26

How to Cite

Tan, Z., Wang, M., Xie, J., Chen, Y., & Shi, X. (2018). Deep Semantic Role Labeling With Self-Attention. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11928

Section

Main Track: NLP and Knowledge Representation