Learning Bounded Context-Free-Grammar via LSTM and the Transformer: Difference and the Explanations

Authors

  • Hui Shi University of California, San Diego
  • Sicun Gao University of California, San Diego
  • Yuandong Tian Facebook
  • Xinyun Chen UC Berkeley
  • Jishen Zhao University of California, San Diego

DOI:

https://doi.org/10.1609/aaai.v36i8.20801

Keywords:

Machine Learning (ML), Speech & Natural Language Processing (SNLP)

Abstract

Long Short-Term Memory (LSTM) and Transformers are two popular neural architectures used for natural language processing tasks. Theoretical results show that both are Turing-complete and can represent any context-free language (CFL). In practice, it is often observed that Transformer models have better representation power than LSTM, but the reason is barely understood. We study such practical differences between LSTM and Transformer and propose an explanation based on their latent space decomposition patterns. To achieve this goal, we introduce an oracle training paradigm, which forces the decomposition of the latent representation of LSTM and the Transformer and supervises them with the transitions of the Pushdown Automaton (PDA) of the corresponding CFL. With the forced decomposition, we show that the performance upper bounds of LSTM and Transformer in learning CFL are close: both of them can simulate a stack and perform stack operations along with state transitions. However, the absence of forced decomposition leads to the failure of LSTM models to capture the stack and stack operations, while having a marginal impact on the Transformer model. Lastly, we connect the experiment on the prototypical PDA to a real-world parsing task to re-verify the conclusions.
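The oracle training paradigm supervises the model's latent state with the PDA's transitions at every step. As an informal illustration (not code from the paper), the following minimal Python sketch simulates a PDA for the Dyck-1 language of balanced parentheses and records, at each input symbol, the control state, the stack operation, and the stack contents; a per-step trace of this kind is the sort of signal that oracle-style supervision could target. The function name run_pda, the single control state, and the alphabet are hypothetical choices made for this sketch.

    # Illustrative sketch only (assumed setup, not the paper's code):
    # a deterministic PDA recognizing Dyck-1 (balanced parentheses).
    # At each step it exposes (state, stack operation, stack contents),
    # i.e., the transition information an oracle could supervise against.

    def run_pda(string):
        """Simulate the PDA and return (per-step trace, accepted?)."""
        state = "q0"   # one control state suffices for Dyck-1
        stack = []
        trace = []
        for symbol in string:
            if symbol == "(":
                stack.append("A")        # push on an opening bracket
                op = "push"
            elif symbol == ")":
                if not stack:
                    return trace, False  # pop from empty stack: reject
                stack.pop()              # pop on a matching closing bracket
                op = "pop"
            else:
                return trace, False      # symbol outside the alphabet
            trace.append((state, op, tuple(stack)))
        return trace, not stack          # accept iff the stack is empty

    if __name__ == "__main__":
        for s in ["(())()", "(()"]:
            steps, accepted = run_pda(s)
            print(s, "accepted" if accepted else "rejected", steps)

Running the sketch on "(())()" yields an accepting trace of pushes and pops, while "(()" ends with a non-empty stack and is rejected; in the paper's setting, the model's decomposed latent representation is trained to track exactly such stack operations and state transitions.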

Published

2022-06-28

How to Cite

Shi, H., Gao, S., Tian, Y., Chen, X., & Zhao, J. (2022). Learning Bounded Context-Free-Grammar via LSTM and the Transformer: Difference and the Explanations. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 8267-8276. https://doi.org/10.1609/aaai.v36i8.20801

Section

AAAI Technical Track on Machine Learning III