Go Wider Instead of Deeper

Authors

  • Fuzhao Xue, National University of Singapore
  • Ziji Shi, National University of Singapore
  • Futao Wei, National University of Singapore
  • Yuxuan Lou, National University of Singapore
  • Yong Liu, National University of Singapore
  • Yang You, National University of Singapore

DOI:

https://doi.org/10.1609/aaai.v36i8.20858

Keywords:

Machine Learning (ML)

Abstract

Transformer models with more blocks and residual connections have recently achieved impressive results on various tasks. To achieve better performance with fewer trainable parameters, recent methods propose going shallower by sharing parameters or compressing the model along the depth dimension. However, weak modeling capacity limits their performance. In contrast, going wider by introducing more trainable matrices and parameters would produce a huge model that requires advanced parallelism for training and inference. In this paper, we propose a parameter-efficient framework that goes wider instead of deeper. Specifically, following existing works, we adopt parameter sharing to compress along depth; on its own, such a deployment limits performance. To maximize modeling capacity, we scale along the model width by replacing the feed-forward network (FFN) with a mixture-of-experts (MoE) layer. Across transformer blocks, instead of sharing the normalization layers, we propose to use individual layer normalizations to transform the diverse semantic representations in a more parameter-efficient way. To evaluate our plug-and-play framework, we design WideNet and conduct comprehensive experiments on popular computer vision and natural language processing benchmarks. On ImageNet-1K, our best model outperforms the Vision Transformer (ViT) by 1.5% with 0.72× the trainable parameters. Using 0.46× and 0.13× the parameters, WideNet still surpasses ViT and ViT-MoE by 0.8% and 2.1%, respectively. On four natural language processing datasets, WideNet outperforms ALBERT by 1.8% on average and surpasses BERT with factorized embedding parameterization by 0.8% while using fewer parameters.
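
The abstract describes the core architectural idea: a single transformer block's attention and MoE-FFN weights are reused across all depths, while each depth keeps its own LayerNorm parameters. Below is a minimal PyTorch sketch of that idea, not the authors' released implementation; the module names, dimensions, and the simple top-1 router are illustrative assumptions.

```python
# Minimal sketch of the WideNet idea (assumed structure, not the official code):
# shared attention + shared MoE-FFN across depth, individual LayerNorms per depth.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFFN(nn.Module):
    """Mixture-of-experts replacement for the FFN, with simple top-1 routing."""
    def __init__(self, d_model, d_hidden, n_experts):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                       # x: (batch, seq, d_model)
        gates = F.softmax(self.router(x), dim=-1)
        top1 = gates.argmax(dim=-1)             # chosen expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = (top1 == i)
            if mask.any():
                # scale by the gate value so routing stays differentiable
                out[mask] = expert(x[mask]) * gates[..., i][mask].unsqueeze(-1)
        return out

class WideNetSketch(nn.Module):
    """Reuses one attention + MoE block at every depth; LayerNorms are per-depth."""
    def __init__(self, d_model=256, n_heads=4, d_hidden=1024,
                 n_experts=4, depth=12):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.moe = MoEFFN(d_model, d_hidden, n_experts)        # shared across depth
        self.norms1 = nn.ModuleList([nn.LayerNorm(d_model) for _ in range(depth)])
        self.norms2 = nn.ModuleList([nn.LayerNorm(d_model) for _ in range(depth)])

    def forward(self, x):
        for ln1, ln2 in zip(self.norms1, self.norms2):         # same block, new norms
            h = ln1(x)
            x = x + self.attn(h, h, h, need_weights=False)[0]
            x = x + self.moe(ln2(x))
        return x

tokens = torch.randn(2, 16, 256)
print(WideNetSketch()(tokens).shape)            # torch.Size([2, 16, 256])
```

In this sketch, going "wider" only adds expert FFNs and per-depth LayerNorms, so the parameter count grows far more slowly than stacking independent blocks would.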

Published

2022-06-28

How to Cite

Xue, F., Shi, Z., Wei, F., Lou, Y., Liu, Y., & You, Y. (2022). Go Wider Instead of Deeper. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 8779-8787. https://doi.org/10.1609/aaai.v36i8.20858

Section

AAAI Technical Track on Machine Learning III