Online Embedding Compression for Text Classification Using Low Rank Matrix Factorization

Authors

  • Anish Acharya, Amazon
  • Rahul Goel, Amazon
  • Angeliki Metallinou, Amazon
  • Inderjit Dhillon, University of Texas at Austin

DOI:

https://doi.org/10.1609/aaai.v33i01.33016196

Abstract

Deep learning models have become the state of the art for natural language processing (NLP) tasks; however, deploying these models in production systems poses significant memory constraints. Existing compression methods are either lossy or introduce significant latency. We propose a compression method that leverages low rank matrix factorization during training to compress the word embedding layer, which represents the size bottleneck for most NLP models. Our models are trained, compressed, and then further re-trained on the downstream task to recover accuracy while maintaining the reduced size. Empirically, we show that the proposed method can achieve 90% compression with minimal impact on accuracy for sentence classification tasks, and outperforms alternative methods like fixed-point quantization or offline word embedding compression. We also analyze the inference time and storage space for our method through FLOP calculations, showing that we can compress DNN models by a configurable ratio and regain accuracy loss without introducing additional latency compared to fixed-point quantization. Finally, we introduce a novel learning rate schedule, the Cyclically Annealed Learning Rate (CALR), which we empirically demonstrate to outperform other popular adaptive learning rate algorithms on a sentence classification benchmark.
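The core idea of the abstract can be illustrated with a minimal sketch: replace the embedding matrix E (vocab × dim) with a rank-r product A·B, then fine-tune the factors. The sizes and rank below are hypothetical placeholders (not from the paper); they are chosen so the parameter-count arithmetic lands near the 90% compression figure the abstract reports.

```python
import numpy as np

# Hypothetical sizes; the paper compresses the word embedding layer,
# which dominates model size for most NLP models.
vocab_size, embed_dim, rank = 10000, 300, 30  # rank controls the compression ratio

rng = np.random.default_rng(0)
E = rng.standard_normal((vocab_size, embed_dim))  # stand-in embedding matrix

# Truncated SVD gives the best rank-r approximation in Frobenius norm
# (Eckart-Young); the factors are then re-trained on the downstream task.
U, s, Vt = np.linalg.svd(E, full_matrices=False)
A = U[:, :rank] * s[:rank]   # shape (vocab_size, rank)
B = Vt[:rank, :]             # shape (rank, embed_dim)

# A single embedding lookup E[i] becomes A[i] @ B at inference time.
original_params = vocab_size * embed_dim
compressed_params = vocab_size * rank + rank * embed_dim
compression = 1 - compressed_params / original_params
print(f"compression: {compression:.1%}")  # prints "compression: 89.7%"
```

With these illustrative sizes, storage drops from 3.0M to 0.309M parameters, matching the configurable ~90% compression regime discussed in the abstract; the extra per-lookup cost is a small rank × dim matrix-vector product.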

Published

2019-07-17

How to Cite

Acharya, A., Goel, R., Metallinou, A., & Dhillon, I. (2019). Online Embedding Compression for Text Classification Using Low Rank Matrix Factorization. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 6196-6203. https://doi.org/10.1609/aaai.v33i01.33016196

Section

AAAI Technical Track: Natural Language Processing