Learning Compact Features via In-Training Representation Alignment

Authors

  • Xin Li Wayne State University
  • Xiangrui Li Wayne State University
  • Deng Pan Wayne State University
  • Yao Qiang Wayne State University
  • Dongxiao Zhu Wayne State University

DOI:

https://doi.org/10.1609/aaai.v37i7.26044

Keywords:

ML: Representation Learning, ML: Learning Theory

Abstract

Deep neural networks (DNNs) for supervised learning can be viewed as a pipeline of a feature extractor (i.e., the last hidden layer) and a linear classifier (i.e., the output layer) that are trained jointly with stochastic gradient descent (SGD) on a loss function (e.g., cross-entropy). In each iteration, the true gradient of the loss function is estimated using a mini-batch sampled from the training set, and model parameters are then updated with the mini-batch gradient. Although the latter provides an unbiased estimate of the former, it is subject to substantial variance arising from the size and number of sampled mini-batches, leading to noisy and jumpy updates. To stabilize such undesirable variance in estimating the true gradients, we propose In-Training Representation Alignment (ITRA), which explicitly aligns the feature distributions of two different mini-batches with a matching loss during SGD training. We also provide a rigorous analysis of the desirable effects of the matching loss on feature representation learning: (1) extracting compact feature representations; (2) reducing over-adaptation to mini-batches via an adaptive weighting mechanism; and (3) accommodating multi-modality. Finally, we conduct large-scale experiments on both image and text classification to demonstrate its superior performance over strong baselines.
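
The training scheme described in the abstract admits a compact illustration. Below is a minimal PyTorch sketch, assuming a Gaussian-kernel maximum mean discrepancy (MMD) as the matching loss between the features of two independently sampled mini-batches; the paper's exact matching loss and adaptive weighting are not specified on this page, so the names here (gaussian_mmd, itra_step, lambda_match) are illustrative assumptions rather than the authors' published implementation.

    import torch
    import torch.nn.functional as F

    def gaussian_mmd(x, y, sigma=1.0):
        # (Biased) estimate of squared MMD between two feature batches using a
        # Gaussian (RBF) kernel. ILLUSTRATIVE: the paper's actual matching
        # loss is not given on this page.
        def rbf(a, b):
            d2 = torch.cdist(a, b) ** 2          # pairwise squared distances
            return torch.exp(-d2 / (2 * sigma ** 2))
        return rbf(x, x).mean() + rbf(y, y).mean() - 2 * rbf(x, y).mean()

    def itra_step(extractor, classifier, batch_a, batch_b, optimizer,
                  lambda_match=0.1):
        # One SGD step: cross-entropy on mini-batch A, plus a matching loss
        # that aligns the feature distributions of the two mini-batches.
        xa, ya = batch_a
        xb, _ = batch_b
        feats_a = extractor(xa)                  # last-hidden-layer features
        feats_b = extractor(xb)
        logits = classifier(feats_a)             # linear output layer
        loss = (F.cross_entropy(logits, ya)
                + lambda_match * gaussian_mmd(feats_a, feats_b))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    # Toy usage with random data.
    torch.manual_seed(0)
    extractor = torch.nn.Sequential(torch.nn.Linear(20, 64), torch.nn.ReLU())
    classifier = torch.nn.Linear(64, 5)
    opt = torch.optim.SGD(list(extractor.parameters())
                          + list(classifier.parameters()), lr=0.1)
    make_batch = lambda: (torch.randn(32, 20), torch.randint(0, 5, (32,)))
    print(itra_step(extractor, classifier, make_batch(), make_batch(), opt))

Here the fixed weight lambda_match stands in for the paper's adaptive weighting mechanism; the key point is that the matching term penalizes discrepancy between the two mini-batches' feature distributions, discouraging over-adaptation to any single mini-batch.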

Published

2023-06-26

How to Cite

Li, X., Li, X., Pan, D., Qiang, Y., & Zhu, D. (2023). Learning Compact Features via In-Training Representation Alignment. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 8675-8683. https://doi.org/10.1609/aaai.v37i7.26044

Section

AAAI Technical Track on Machine Learning II