Beyond Prototypes: Semantic Anchor Regularization for Better Representation Learning

Authors

  • Yanqi Ge, Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China
  • Qiang Nie, Tencent Youtu Lab
  • Ye Huang, Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China
  • Yong Liu, Tencent Youtu Lab
  • Chengjie Wang, Tencent Youtu Lab; Shanghai Jiao Tong University
  • Feng Zheng, Southern University of Science and Technology
  • Wen Li, Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China
  • Lixin Duan, Shenzhen Institute for Advanced Study, University of Electronic Science and Technology of China; Sichuan Provincial People’s Hospital

DOI:

https://doi.org/10.1609/aaai.v38i3.27958

Keywords:

CV: Segmentation, CV: Representation Learning for Vision

Abstract

One of the ultimate goals of representation learning is to achieve intra-class compactness and inter-class separability. Many outstanding metric-based and prototype-based methods following the Expectation-Maximization paradigm have been proposed for this objective. However, they inevitably introduce biases into the learning process, particularly with long-tailed training data. In this paper, we reveal that the class prototype need not be derived from training features, and propose a novel perspective: using pre-defined class anchors as feature centroids to unidirectionally guide feature learning. However, the pre-defined anchors may have a large semantic distance from the pixel features, which prevents them from being applied directly. To address this issue and generate feature centroids independent of feature learning, a simple yet effective Semantic Anchor Regularization (SAR) is proposed. SAR ensures the inter-class separability of semantic anchors in the semantic space by employing a classifier-aware auxiliary cross-entropy loss during training via disentanglement learning. By pulling the learned features toward these semantic anchors, several advantages can be attained: 1) intra-class compactness and natural inter-class separability, 2) avoidance of biases or errors induced by feature learning, and 3) robustness to the long-tailed problem. The proposed SAR can be used in a plug-and-play manner in existing models. Extensive experiments demonstrate that SAR outperforms previous sophisticated prototype-based methods. The implementation is available at https://github.com/geyanqi/SAR.
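The two loss terms described above (pulling features toward fixed class anchors, plus an auxiliary cross-entropy that keeps the anchors themselves separable under the classifier) can be illustrated with a minimal NumPy sketch. This is an assumed, simplified reading of the abstract, not the authors' released implementation: the squared-distance pull term, the function names, and the toy orthogonal anchors are all illustrative choices.

```python
import numpy as np

def semantic_anchor_pull_loss(features, labels, anchors):
    """Pull each feature toward its class's pre-defined semantic anchor.

    Illustrative squared-distance form (an assumption, not necessarily
    the loss used in the paper). anchors is a fixed (C, D) array that is
    NOT derived from the training features.
    """
    diffs = features - anchors[labels]            # (N, D) residuals
    return float(np.mean(np.sum(diffs ** 2, axis=1)))

def anchor_separability_loss(anchors, classifier_W):
    """Classifier-aware auxiliary cross-entropy on the anchors themselves.

    Each anchor should be classified as its own class, encouraging
    inter-class separability of the anchors in semantic space
    (hypothetical formulation of the auxiliary loss).
    """
    logits = anchors @ classifier_W.T             # (C, C) class scores
    logits = logits - logits.max(axis=1, keepdims=True)   # stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy with target label = the anchor's own class index.
    return float(-np.mean(np.diag(log_probs)))

# Toy usage: 3 classes, 4-dim features, fixed orthogonal anchors.
rng = np.random.default_rng(0)
anchors = np.eye(3, 4)                            # pre-defined, not learned
labels = np.array([0, 1, 2, 2])
features = anchors[labels] + 0.1 * rng.standard_normal((4, 4))
pull = semantic_anchor_pull_loss(features, labels, anchors)
sep = anchor_separability_loss(anchors, 5.0 * np.eye(3, 4))
```

In a real training loop the total objective would combine the task loss with weighted versions of these two terms; because the anchors are fixed, gradients flow only into the features, matching the "unidirectional guidance" described in the abstract.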

Published

2024-03-24

How to Cite

Ge, Y., Nie, Q., Huang, Y., Liu, Y., Wang, C., Zheng, F., Li, W., & Duan, L. (2024). Beyond Prototypes: Semantic Anchor Regularization for Better Representation Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(3), 1887-1895. https://doi.org/10.1609/aaai.v38i3.27958

Section

AAAI Technical Track on Computer Vision II