TY - JOUR
AU - Al-Rfou, Rami
AU - Choe, Dokook
AU - Constant, Noah
AU - Guo, Mandy
AU - Jones, Llion
PY - 2019/07/17
Y2 - 2024/03/28
TI - Character-Level Language Modeling with Deeper Self-Attention
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 33
IS - 01
SE - AAAI Technical Track: Machine Learning
DO - 10.1609/aaai.v33i01.33013159
UR - https://ojs.aaai.org/index.php/AAAI/article/view/4182
SP - 3159-3166
AB - LSTMs and other RNN variants have shown strong performance on character-level language modeling. These models are typically trained using truncated backpropagation through time, and it is common to assume that their success stems from their ability to remember long-term contexts. In this paper, we show that a deep (64-layer) transformer model (Vaswani et al. 2017) with fixed context outperforms RNN variants by a large margin, achieving state of the art on two popular benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good results at this depth, we show that it is important to add auxiliary losses, both at intermediate network layers and intermediate sequence positions.
ER -