IsoBN: Fine-Tuning BERT with Isotropic Batch Normalization

Authors

  • Wenxuan Zhou, University of Southern California
  • Bill Yuchen Lin, University of Southern California
  • Xiang Ren, University of Southern California

DOI:

https://doi.org/10.1609/aaai.v35i16.17718

Keywords:

Learning & Optimization for SNLP

Abstract

Fine-tuning pre-trained language models (PTLMs), such as BERT and its variant RoBERTa, has been a common practice for advancing performance in natural language understanding (NLU) tasks. Recent advances in representation learning show that isotropic (i.e., unit-variance and uncorrelated) embeddings can significantly improve performance on downstream tasks, with faster convergence and better generalization. The isotropy of the pre-trained embeddings in PTLMs, however, is relatively under-explored. In this paper, we analyze the isotropy of the pre-trained [CLS] embeddings of PTLMs with straightforward visualization and point out two major issues: high variance in their standard deviation, and high correlation between different dimensions. We also propose a new network regularization method, isotropic batch normalization (IsoBN), to address these issues and learn more isotropic representations during fine-tuning by dynamically penalizing dominating principal components. This simple yet effective fine-tuning method yields an absolute improvement of about 1.0 point on the average score of seven NLU tasks.
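
The abstract describes the core idea at a high level: push the pooled [CLS] embedding toward isotropy (unit variance, low correlation) by penalizing dominating principal components before the classification head. The PyTorch sketch below illustrates that idea under simplifying assumptions; it is not the exact IsoBN computation from the paper, and the module name `SimpleIsotropicNorm`, the 0.5 shrinkage factor, and the running-statistics momentum are all illustrative choices.

```python
# Illustrative sketch only, NOT the paper's exact IsoBN algorithm.
import torch
import torch.nn as nn


class SimpleIsotropicNorm(nn.Module):
    """Standardize each embedding dimension with running statistics, then
    softly shrink the projection onto the batch's dominating principal
    direction (a crude stand-in for "penalizing dominating components")."""

    def __init__(self, dim: int, momentum: float = 0.05, eps: float = 1e-5):
        super().__init__()
        self.momentum, self.eps = momentum, eps
        self.register_buffer("running_mean", torch.zeros(dim))
        self.register_buffer("running_var", torch.ones(dim))

    def forward(self, cls_emb: torch.Tensor) -> torch.Tensor:
        # cls_emb: (batch_size, dim) pooled [CLS] representation
        if self.training:
            mean = cls_emb.mean(dim=0)
            var = cls_emb.var(dim=0, unbiased=False)
            # Exponential moving average of batch statistics.
            self.running_mean.lerp_(mean.detach(), self.momentum)
            self.running_var.lerp_(var.detach(), self.momentum)
        else:
            mean, var = self.running_mean, self.running_var

        # Unit-variance step (standard batch normalization, no affine terms).
        z = (cls_emb - mean) / torch.sqrt(var + self.eps)

        if self.training and z.size(0) > 1:
            # Estimate the top principal direction of the batch and shrink
            # each example's projection onto it.
            with torch.no_grad():
                _, _, vh = torch.linalg.svd(z, full_matrices=False)
                top_dir = vh[0]                      # (dim,)
            proj = (z @ top_dir).unsqueeze(1) * top_dir
            z = z - 0.5 * proj                       # 0.5: arbitrary shrinkage
        return z


# Usage sketch (hypothetical names):
#   norm = SimpleIsotropicNorm(dim=768)
#   logits = classifier(norm(bert_outputs.pooler_output))
```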

Published

2021-05-18

How to Cite

Zhou, W., Lin, B. Y., & Ren, X. (2021). IsoBN: Fine-Tuning BERT with Isotropic Batch Normalization. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16), 14621-14629. https://doi.org/10.1609/aaai.v35i16.17718

Issue

Vol. 35 No. 16 (2021)

Section

AAAI Technical Track on Speech and Natural Language Processing III