Difficulty-aware Balancing Margin Loss for Long-tailed Recognition
DOI:
https://doi.org/10.1609/aaai.v39i19.34261
Abstract
When trained with severely imbalanced data, deep neural networks often struggle to accurately recognize classes with few samples. Previous studies in long-tailed recognition have attempted to rebalance biased learning using known sample distributions, primarily addressing different classification difficulties at the class level. However, these approaches often overlook the variation in instance difficulty within each class. In this paper, we propose a difficulty-aware balancing margin (DBM) loss, which considers both class imbalance and instance difficulty. DBM loss comprises two components: a class-wise margin to mitigate learning bias caused by imbalanced class frequencies, and an instance-wise margin assigned to hard positive samples based on their individual difficulty. DBM loss improves class discriminability by assigning larger margins to more difficult samples. Our method combines effortlessly with existing approaches and consistently improves performance across various long-tailed recognition benchmarks.
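The abstract describes a loss with two margin terms: a class-wise margin that grows as class frequency shrinks, and an instance-wise margin that grows with a sample's difficulty. The sketch below illustrates one plausible reading of that idea; the margin schedule (an LDAM-style n_y^{-1/4} class margin), the difficulty measure (1 − p_true), and all hyperparameter values are illustrative assumptions, not the paper's published formulation.

```python
import numpy as np

def softmax(z):
    """Numerically stable row-wise softmax."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def dbm_like_loss(cosine, labels, class_counts,
                  scale=30.0, max_class_margin=0.5, inst_coeff=0.1):
    """Margin cross-entropy combining:
      - a class-wise margin, larger for rarer classes
        (LDAM-style n_y^{-1/4} schedule, an assumption here), and
      - an instance-wise margin proportional to the sample's
        difficulty, measured as 1 - p(true class) (also an assumption).
    `cosine` holds cosine similarities between features and class
    prototypes; `scale` is the usual logit temperature."""
    n = cosine.shape[0]
    idx = np.arange(n)

    # Class-wise margin: rarer classes get larger margins.
    counts = np.asarray(class_counts, dtype=float)
    cls_m = counts ** -0.25
    cls_m = cls_m * (max_class_margin / cls_m.max())

    # Instance-wise margin: harder samples (low true-class probability)
    # get an extra margin on top of the class-wise one.
    p_true = softmax(scale * cosine)[idx, labels]
    inst_m = inst_coeff * (1.0 - p_true)

    # Subtract the combined margin from the true-class logit only.
    margined = cosine.copy()
    margined[idx, labels] -= cls_m[labels] + inst_m

    logp_true = np.log(softmax(scale * margined)[idx, labels])
    return -logp_true.mean()
```

Because the margin is subtracted only from the true-class logit, a larger margin makes the training objective strictly harder to satisfy, which is what pushes the network toward wider decision boundaries for rare classes and hard instances.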
Published
2025-04-11
How to Cite
Son, M., Koo, I., Park, J., & Kim, C. (2025). Difficulty-aware Balancing Margin Loss for Long-tailed Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 39(19), 20522–20530. https://doi.org/10.1609/aaai.v39i19.34261
Section
AAAI Technical Track on Machine Learning V