Playing Lottery Tickets with Vision and Language

Authors

  • Zhe Gan Microsoft
  • Yen-Chun Chen Microsoft
  • Linjie Li Microsoft
  • Tianlong Chen University of Texas at Austin
  • Yu Cheng Microsoft Research
  • Shuohang Wang Microsoft
  • Jingjing Liu Tsinghua University
  • Lijuan Wang Microsoft
  • Zicheng Liu Microsoft

DOI:

https://doi.org/10.1609/aaai.v36i1.19945

Keywords:

Computer Vision (CV)

Abstract

Large-scale pre-training has recently revolutionized vision-and-language (VL) research. Models such as LXMERT and UNITER have significantly lifted the state of the art over a wide range of VL tasks. However, the large number of parameters in such models hinders their application in practice. In parallel, work on the lottery ticket hypothesis (LTH) has shown that deep neural networks contain small matching subnetworks that can achieve performance on par with or even better than the dense networks when trained in isolation. In this work, we perform the first empirical study to assess whether such trainable subnetworks also exist in pre-trained VL models. We use UNITER as the main testbed (and also test LXMERT and ViLT), and consolidate 7 representative VL tasks for experiments: visual question answering, visual commonsense reasoning, visual entailment, referring expression comprehension, image-text retrieval, GQA, and NLVR2. Through comprehensive analysis, we summarize our main findings as follows. (i) It is difficult to find subnetworks that strictly match the performance of the full model. However, we can find relaxed winning tickets at 50%-70% sparsity that maintain 99% of the full accuracy. (ii) Subnetworks found by task-specific pruning transfer reasonably well to the other tasks, while those found on the pre-training tasks at 60%/70% sparsity transfer universally, matching 98%/96% of the full accuracy on average over all the tasks. (iii) Besides UNITER, other models such as LXMERT and ViLT can also play lottery tickets. However, the highest sparsity we can achieve for ViLT is far lower than for LXMERT and UNITER (30% vs. 70%). (iv) LTH also remains relevant when using other training methods (e.g., adversarial training).
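For readers unfamiliar with how such subnetworks are found, the sketch below illustrates global magnitude pruning in PyTorch, the standard procedure in LTH work: the smallest-magnitude weights are masked out, and the mask is re-applied during training so the remaining subnetwork is trained in isolation. This is a minimal illustrative sketch, not the paper's exact pipeline; the helper names (magnitude_prune_masks, apply_masks), the one-shot global threshold, and the choice of sparsity are assumptions, and the paper's precise pruning schedule and weight-rewinding details may differ.

import torch

def magnitude_prune_masks(model, sparsity=0.6):
    # Pool all 2-D weight matrices (linear/embedding layers; biases and
    # LayerNorm parameters are left dense) and pick one global threshold
    # so that a `sparsity` fraction of those weights is removed.
    pooled = torch.cat([p.detach().abs().flatten()
                        for _, p in model.named_parameters() if p.dim() == 2])
    k = max(1, int(sparsity * pooled.numel()))
    threshold = pooled.kthvalue(k).values  # global magnitude cutoff
    return {name: (p.detach().abs() > threshold).float()
            for name, p in model.named_parameters() if p.dim() == 2}

def apply_masks(model, masks):
    # Zero out pruned weights; calling this after every optimizer step
    # keeps them at zero, so the subnetwork is trained "in isolation".
    with torch.no_grad():
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])

In this sketch, a 60% sparsity mask (matching the paper's universally transferable tickets) would be computed once, after which apply_masks is invoked after each optimizer step throughout fine-tuning.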

Published

2022-06-28

How to Cite

Gan, Z., Chen, Y.-C., Li, L., Chen, T., Cheng, Y., Wang, S., Liu, J., Wang, L., & Liu, Z. (2022). Playing Lottery Tickets with Vision and Language. Proceedings of the AAAI Conference on Artificial Intelligence, 36(1), 652-660. https://doi.org/10.1609/aaai.v36i1.19945

Section

AAAI Technical Track on Computer Vision I