Engineering the Neural Collapse Geometry of Supervised-Contrastive Loss (Student Abstract)

Authors

  • Jaidev Gill, University of British Columbia
  • Vala Vakilian, University of British Columbia
  • Christos Thrampoulidis, University of British Columbia

DOI:

https://doi.org/10.1609/aaai.v38i21.30447

Keywords:

Neural Collapse, Feature Geometry, Supervised Contrastive

Abstract

Supervised-contrastive loss (SCL) is an alternative to cross-entropy (CE) loss for classification tasks that exploits similarities in the embedding space to learn richer representations. Previous works have used trainable prototypes to improve the test accuracy of SCL when training under class imbalance. In this work, we propose using fixed prototypes to engineer the feature geometry when training with SCL. We gain further insight by considering a limiting scenario in which the number of prototypes far exceeds the original batch size. Through this, we establish a connection to CE loss with a fixed classifier and normalized embeddings. We validate our findings through a series of experiments with deep neural networks on benchmark vision datasets.
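The idea described above can be illustrated with a minimal numpy sketch: fixed class prototypes (here laid out as a simplex ETF, a common target geometry in the neural-collapse literature) are appended to the batch before computing the standard SCL objective. The function names, the choice of ETF prototypes, and the temperature value are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def simplex_etf(num_classes, dim):
    # Simplex-ETF prototype directions, one unit vector per class
    # (assumed prototype geometry; requires dim >= num_classes).
    assert dim >= num_classes
    M = np.sqrt(num_classes / (num_classes - 1)) * (
        np.eye(num_classes) - np.ones((num_classes, num_classes)) / num_classes
    )
    P = np.zeros((num_classes, dim))
    P[:, :num_classes] = M
    return P / np.linalg.norm(P, axis=1, keepdims=True)

def scl_with_prototypes(feats, labels, protos, proto_labels, tau=0.1):
    # Append the fixed prototypes to the batch, then evaluate the
    # standard supervised-contrastive loss over the original anchors.
    z = np.concatenate([feats, protos], axis=0)
    y = np.concatenate([labels, proto_labels], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # normalized embeddings
    sim = z @ z.T / tau
    n = len(feats)  # only real batch samples act as anchors
    total = 0.0
    for i in range(n):
        others = np.arange(len(y)) != i
        # log of the denominator: sum over all non-anchor samples.
        log_denom = np.log(np.exp(sim[i, others]).sum())
        # positives: same-class samples plus the class prototype,
        # so every anchor has at least one positive by construction.
        pos_idx = np.where(others & (y == y[i]))[0]
        total += np.mean([log_denom - sim[i, p] for p in pos_idx])
    return total / n
```

A quick usage example: with two classes in four dimensions, `scl_with_prototypes(feats, labels, simplex_etf(2, 4), np.array([0, 1]))` returns a finite positive scalar. Note that appending prototypes guarantees each anchor a positive pair even under severe imbalance, which is one motivation for the prototype-based variants the abstract discusses.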

Published

2024-03-24

How to Cite

Gill, J., Vakilian, V., & Thrampoulidis, C. (2024). Engineering the Neural Collapse Geometry of Supervised-Contrastive Loss (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23503-23505. https://doi.org/10.1609/aaai.v38i21.30447