Long-Tailed Learning as Multi-Objective Optimization
DOI: https://doi.org/10.1609/aaai.v38i4.28103

Keywords: CV: Bias, Fairness & Privacy; CV: Learning & Optimization for CV; CV: Other Foundations of Computer Vision

Abstract
Real-world data is often extremely imbalanced and exhibits a long-tailed distribution, yielding models that are biased towards classes with sufficient samples and perform poorly on rare classes. Recent methods propose to rebalance classes, but they suffer from the seesaw dilemma: increasing performance on tail classes may decrease that of head classes, and vice versa. In this paper, we argue that the seesaw dilemma stems from the gradient imbalance across classes, in which the gradients of inappropriate classes are weighted most heavily during updates, making the model prone to overcompensating or undercompensating the tail classes. To achieve ideal compensation, we formulate long-tailed recognition as a multi-objective optimization problem that fairly respects the contributions of head and tail classes simultaneously. For efficiency, we propose a Gradient-Balancing Grouping (GBG) strategy that gathers classes with similar gradient directions, thus approximately making every update follow a Pareto descent direction. Our GBG method drives classes with similar gradient directions to form a more representative gradient and provides ideal compensation to the tail classes. Moreover, we conduct extensive experiments on commonly used long-tailed learning benchmarks and demonstrate the superiority of our method over existing state-of-the-art methods. Our code is released at https://github.com/WickyLee1998/GBG_v1.
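The grouping idea in the abstract — gathering classes whose gradients point in similar directions so that each group contributes one representative gradient — can be sketched as follows. This is a minimal, hypothetical illustration (the function name, greedy strategy, and `threshold` parameter are our assumptions for exposition), not the paper's actual GBG implementation:

```python
import numpy as np

def group_by_gradient_direction(grads, threshold=0.9):
    """Greedily group per-class gradient vectors by cosine similarity.

    Hypothetical sketch of the grouping step: a class joins an existing
    group if its gradient's cosine similarity to the group's mean gradient
    is at least `threshold`; otherwise it starts a new group. Each group's
    mean gradient then serves as a more representative update direction.
    """
    groups = []  # list of lists of class indices
    reps = []    # running mean gradient of each group
    for i, g in enumerate(grads):
        g = np.asarray(g, dtype=float)
        gn = g / (np.linalg.norm(g) + 1e-12)
        placed = False
        for k, r in enumerate(reps):
            rn = r / (np.linalg.norm(r) + 1e-12)
            if gn @ rn >= threshold:
                groups[k].append(i)
                # update the group's representative (mean) gradient
                reps[k] = np.mean([np.asarray(grads[j], dtype=float)
                                   for j in groups[k]], axis=0)
                placed = True
                break
        if not placed:
            groups.append([i])
            reps.append(g)
    return groups, reps

# Toy usage: two near-parallel gradients merge, the orthogonal one stays apart.
grads = [np.array([1.0, 0.0]), np.array([0.99, 0.1]), np.array([0.0, 1.0])]
groups, reps = group_by_gradient_direction(grads, threshold=0.9)
```

In the paper's formulation, the per-group representative gradients would then be combined under a Pareto descent direction so that no group's objective is sacrificed for another's; the sketch above only covers the grouping step.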
Published
2024-03-24
How to Cite
Li, W., Lyu, F., Shang, F., Wan, L., & Feng, W. (2024). Long-Tailed Learning as Multi-Objective Optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 38(4), 3190-3198. https://doi.org/10.1609/aaai.v38i4.28103
Section
AAAI Technical Track on Computer Vision III