Incorporating Constituent Syntax for Coreference Resolution


  • Fan Jiang, University of Melbourne
  • Trevor Cohn, University of Melbourne



Speech & Natural Language Processing (SNLP)


Syntax has been shown to benefit coreference resolution by incorporating the long-range dependencies and structured information captured by syntax trees, both in traditional statistical machine-learning systems and in recently proposed neural models. However, most leading systems use only dependency trees. We argue that constituent trees also encode important information, such as explicit span-boundary signals captured by nested multi-word phrases, as well as extra linguistic labels and hierarchical structures useful for detecting anaphora. In this work, we propose a simple yet effective graph-based method for incorporating constituent syntactic structures. Moreover, we also explore utilising higher-order neighbourhood information to encode the rich structures in constituent trees. To this end, we propose a novel message propagation mechanism that enables information flow among elements of syntax trees. Experiments on the English and Chinese portions of the OntoNotes 5.0 benchmark show that our proposed model either beats a strong baseline or achieves new state-of-the-art performance. Code is available at
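To make the idea of message propagation over a constituent tree concrete, the following is a minimal illustrative sketch (not the paper's implementation): a toy constituent tree is encoded as a graph, one round of mean-aggregation message passing updates node representations, and squaring the adjacency matrix exposes higher-order (2-hop) neighbourhoods. The tree, node features, and aggregation rule here are all hypothetical choices for illustration.

```python
# Sketch only: message passing over a toy constituent tree.
# Hypothetical tree for "She reads books":
#   nodes 0=S, 1=NP, 2=VP, 3="She", 4=V("reads"), 5=NP("books")
import numpy as np

edges = [(0, 1), (0, 2), (1, 3), (2, 4), (2, 5)]  # parent-child links
n = 6

adj = np.eye(n)  # self-loops so each node keeps its own state
for u, v in edges:
    adj[u, v] = adj[v, u] = 1.0  # undirected: information flows both ways

h = np.ones((n, 4))  # toy node features (e.g. span representations)
deg = adj.sum(axis=1, keepdims=True)
h_new = adj @ h / deg  # each node becomes the mean of its neighbourhood

# Higher-order neighbourhood: powers of the adjacency matrix reach
# k-hop neighbours (e.g. siblings connected through their parent).
adj2 = np.clip(adj @ adj, 0.0, 1.0)  # 2-hop connectivity mask
```

With uniform input features the mean aggregation leaves them unchanged; in a real model `h` would be learned span encodings and the aggregation would use learned weights. Note that `adj2` connects the NP and VP siblings (nodes 1 and 2) via the root, which a single 1-hop round cannot do.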




How to Cite

Jiang, F., & Cohn, T. (2022). Incorporating Constituent Syntax for Coreference Resolution. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10), 10831-10839.


