Learning Greedy Policies for the Easy-First Framework

Authors

  • Jun Xie Oregon State University
  • Chao Ma Oregon State University
  • Janardhan Rao Doppa Washington State University
  • Prashanth Mannem Oregon State University
  • Xiaoli Fern Oregon State University
  • Thomas G. Dietterich Oregon State University
  • Prasad Tadepalli Oregon State University

DOI:

https://doi.org/10.1609/aaai.v29i1.9509

Keywords:

Structured Prediction, Learning for Search, Imitation Learning, Coreference Resolution

Abstract

Easy-first, a search-based structured prediction approach, has been applied to many NLP tasks including dependency parsing and coreference resolution. This approach employs a learned greedy policy (action scoring function) to make easy decisions first, which constrains the remaining decisions and makes them easier. We formulate greedy policy learning in the Easy-first approach as a novel non-convex optimization problem and solve it via an efficient Majorization-Minimization (MM) algorithm. Results on within-document coreference and cross-document joint entity and event coreference tasks demonstrate that the proposed approach achieves statistically significant performance improvements over existing training regimes for Easy-first and is less susceptible to overfitting.
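As a rough, illustrative sketch (not the authors' exact formulation), the easy-first inference loop described in the abstract can be written as follows. The names `easy_first_inference`, `get_actions`, `score`, and `apply_action` are hypothetical placeholders: `score` stands for the learned greedy policy (action scoring function), and each committed decision updates the state so that the remaining decisions are constrained and, ideally, become easier.

```python
def easy_first_inference(state, get_actions, score, apply_action):
    """Greedily commit the highest-scoring ("easiest") action until none remain.

    state        -- current partial structure (e.g., coreference clusters)
    get_actions  -- returns the candidate actions available in `state`
    score        -- learned policy: maps (state, action) to a real-valued score
    apply_action -- returns the new state after committing an action
    """
    while True:
        actions = get_actions(state)
        if not actions:
            return state  # all decisions made: the structure is complete
        # Easy decisions first: commit the most confident action,
        # constraining (and simplifying) the decisions that remain.
        easiest = max(actions, key=lambda a: score(state, a))
        state = apply_action(state, easiest)
```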

Published

2015-02-19

How to Cite

Xie, J., Ma, C., Doppa, J. R., Mannem, P., Fern, X., Dietterich, T. G., & Tadepalli, P. (2015). Learning Greedy Policies for the Easy-First Framework. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1). https://doi.org/10.1609/aaai.v29i1.9509