Memory-Based Jitter: Improving Visual Recognition on Long-Tailed Data with Diversity in Memory
Keywords: Computer Vision (CV), Machine Learning (ML)
Abstract
This paper considers deep visual recognition on long-tailed data. To make our method general, we tackle two applied scenarios, i.e., deep classification and deep metric learning. Under the long-tailed data distribution, most classes (i.e., the tail classes) occupy relatively few samples and tend to lack within-class diversity. A radical solution is to augment the tail classes with higher diversity. To this end, we introduce a simple and reliable method named Memory-based Jitter (MBJ). We observe that during training, the deep model constantly changes its parameters after every iteration, yielding the phenomenon of weight jitter. Consequently, given the same image as input, two historical versions of the model generate two different features in the deeply-embedded space, resulting in feature jitter. Using a memory bank, we collect these (weight or feature) jitters across multiple training iterations and obtain the so-called Memory-based Jitter. The accumulated jitters enhance the within-class diversity of the tail classes and consequently improve long-tailed visual recognition. With slight modifications, MBJ is applicable to two fundamental visual recognition tasks, i.e., deep image classification and deep metric learning (on long-tailed data). Extensive experiments on five long-tailed classification benchmarks and two deep metric learning benchmarks demonstrate significant improvement. Moreover, the achieved performance is on par with the state of the art on both tasks.
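To illustrate the core idea, the following is a minimal, hypothetical sketch (not the authors' implementation) of a per-class memory bank: features produced by successive model versions are accumulated over iterations, and stored jitters can later be drawn as extra within-class samples for tail classes. All names, the FIFO capacity, and the sampling scheme are illustrative assumptions.

```python
import numpy as np

class MemoryBank:
    """Hypothetical sketch: accumulate each class's features across
    training iterations so tail classes gain within-class diversity."""

    def __init__(self, num_classes, capacity=8):
        # capacity: assumed per-class FIFO size, not from the paper
        self.capacity = capacity
        self.bank = {c: [] for c in range(num_classes)}

    def update(self, features, labels):
        # Push this iteration's features (the "feature jitters"
        # produced by the current model version) into the bank.
        for feat, cls in zip(features, labels):
            self.bank[cls].append(feat)
            if len(self.bank[cls]) > self.capacity:
                self.bank[cls].pop(0)  # evict the oldest jitter

    def sample(self, label, k=2):
        # Draw up to k stored jitters of one class to serve as
        # additional positives when computing the training loss.
        stored = self.bank[label]
        if not stored:
            return np.empty((0,))
        idx = np.random.choice(len(stored),
                               size=min(k, len(stored)),
                               replace=False)
        return np.stack([stored[i] for i in idx])
```

In this sketch, a training loop would call `update` after each forward pass and `sample` when forming the loss for tail-class anchors, so features from older model versions act as free augmentation.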
How to Cite
Liu, J., Li, W., & Sun, Y. (2022). Memory-Based Jitter: Improving Visual Recognition on Long-Tailed Data with Diversity in Memory. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2), 1720-1728. https://doi.org/10.1609/aaai.v36i2.20064
AAAI Technical Track on Computer Vision II