Incorporating Constituent Syntax for Coreference Resolution
Keywords: Speech & Natural Language Processing (SNLP)
Abstract
Syntax has been shown to benefit coreference resolution through the long-range dependencies and structured information captured by syntax trees, both in traditional statistical machine-learning systems and in recently proposed neural models. However, most leading systems use only dependency trees. We argue that constituent trees also encode important information, such as explicit span-boundary signals captured by nested multi-word phrases, and extra linguistic labels and hierarchical structures useful for detecting anaphora. In this work, we propose a simple yet effective graph-based method for incorporating constituent syntactic structures. Moreover, we also explore utilising higher-order neighbourhood information to encode the rich structures in constituent trees. A novel message propagation mechanism is therefore proposed to enable information flow among elements in syntax trees. Experiments on the English and Chinese portions of the OntoNotes 5.0 benchmark show that our proposed model either beats a strong baseline or achieves new state-of-the-art performance. Code is available at https://github.com/Fantabulous-J/Coref-Constituent-Graph.
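The abstract's graph-based message propagation can be illustrated with a minimal sketch: treat constituent-tree nodes as graph vertices connected by parent-child edges, and repeatedly mix each node's representation with its neighbours'. The toy tree, scalar features, and function names below are hypothetical illustrations, not the authors' actual model (which operates on learned vector representations).

```python
# Illustrative sketch of message propagation over a constituent-tree graph.
# The tree, features, and update rule are simplified assumptions for
# demonstration only; the paper's model uses learned neural representations.

def propagate(features, edges, steps=1):
    """Run `steps` rounds of mean-aggregation message passing.

    features: dict mapping node name -> scalar feature
    edges: list of (parent, child) pairs from the constituent tree
    """
    # Build an undirected adjacency map from parent-child edges.
    neigh = {n: set() for n in features}
    for parent, child in edges:
        neigh[parent].add(child)
        neigh[child].add(parent)

    for _ in range(steps):
        updated = {}
        for node, feat in features.items():
            # Each node averages its own feature with its neighbours' messages.
            msgs = [features[m] for m in neigh[node]]
            updated[node] = (feat + sum(msgs)) / (1 + len(msgs))
        features = updated
    return features

# Toy constituent tree: S -> NP VP; NP -> "the" "cat"; VP -> "sleeps"
edges = [("S", "NP"), ("S", "VP"),
         ("NP", "the"), ("NP", "cat"), ("VP", "sleeps")]
feats = {"S": 0.0, "NP": 1.0, "VP": 2.0, "the": 3.0, "cat": 4.0, "sleeps": 5.0}
out = propagate(feats, edges)
```

Running more than one step lets information flow beyond immediate parents and children, which is the intuition behind the higher-order neighbourhood information mentioned in the abstract.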
How to Cite
Jiang, F., & Cohn, T. (2022). Incorporating Constituent Syntax for Coreference Resolution. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10), 10831-10839. https://doi.org/10.1609/aaai.v36i10.21329
AAAI Technical Track on Speech and Natural Language Processing