A Continuous Relaxation of Beam Search for End-to-End Training of Neural Sequence Models

Authors

  • Kartik Goyal Carnegie Mellon University, Language Technologies Institute
  • Graham Neubig Carnegie Mellon University, Language Technologies Institute
  • Chris Dyer DeepMind
  • Taylor Berg-Kirkpatrick Carnegie Mellon University, Language Technologies Institute

DOI:

https://doi.org/10.1609/aaai.v32i1.11806

Keywords:

Beam Search, Continuous Relaxation, Neural Sequence Models, Seq2seq Models

Abstract

Beam search is a desirable choice of test-time decoding algorithm for neural sequence models because it potentially avoids search errors made by simpler greedy methods. However, typical cross-entropy training procedures for these models do not directly consider the behavior of the final decoding method. As a result, for cross-entropy trained models, beam decoding can sometimes yield reduced test performance when compared with greedy decoding. In order to train models that can more effectively make use of beam search, we propose a new training procedure that focuses on the final loss metric (e.g., Hamming loss) evaluated on the output of beam search. While well-defined, this "direct loss" objective is itself discontinuous and thus difficult to optimize. Hence, in our approach, we form a sub-differentiable surrogate objective by introducing a novel continuous approximation of the beam search decoding procedure. In experiments, we show that optimizing this new training objective yields substantially better results on two sequence tasks (Named Entity Recognition and CCG Supertagging) when compared with both cross-entropy trained greedy decoding and cross-entropy trained beam decoding baselines.
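To make the core idea concrete, here is a minimal sketch (in assumed PyTorch; the function name soft_beam_step, the suppression constant, and the iterative soft-argmax formulation are illustrative assumptions, not the authors' exact algorithm) of how a hard top-k beam step can be relaxed into a differentiable one: each beam slot takes a temperature-controlled softmax-weighted blend of all candidate scores and embeddings, so gradients can flow through the decoding loop.

```python
# Illustrative sketch only: replaces beam search's hard top-k with
# temperature-controlled soft selections so decoding is sub-differentiable.
import torch
import torch.nn.functional as F

def soft_beam_step(cand_scores, cand_embeds, beam_size, temperature=1.0):
    """One relaxed beam-search step (hypothetical helper).

    cand_scores: (num_candidates,) cumulative log-scores of all expansions.
    cand_embeds: (num_candidates, embed_dim) embeddings of the expansions.
    Returns soft scores/embeddings for each of the beam_size slots.

    Instead of a hard top-k, each slot takes a softmax-weighted average
    over ALL candidates; as temperature -> 0 the weights peak and the
    step approaches the discrete top-k choice.
    """
    slot_scores, slot_embeds = [], []
    scores = cand_scores.clone()
    for _ in range(beam_size):
        w = F.softmax(scores / temperature, dim=0)   # soft argmax weights
        slot_scores.append((w * cand_scores).sum())  # soft "max" score
        slot_embeds.append(w @ cand_embeds)          # blended embedding
        # Softly suppress the selected candidate before filling the next
        # slot (an assumed stand-in for removing it from the pool).
        scores = scores - 1e3 * w
    return torch.stack(slot_scores), torch.stack(slot_embeds)
```

Because every operation above is differentiable, a loss computed on the final soft beam states can be backpropagated end-to-end; annealing the temperature toward zero recovers behavior close to standard hard beam search at test time.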


Published

2018-04-29

How to Cite

Goyal, K., Neubig, G., Dyer, C., & Berg-Kirkpatrick, T. (2018). A Continuous Relaxation of Beam Search for End-to-End Training of Neural Sequence Models. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11806