MEID: Mixture-of-Experts with Internal Distillation for Long-Tailed Video Recognition

Authors

  • Xinjie Li, Pennsylvania State University
  • Huijuan Xu, Pennsylvania State University

DOI:

https://doi.org/10.1609/aaai.v37i2.25230

Keywords:

CV: Video Understanding & Activity Analysis

Abstract

The long-tailed video recognition problem is especially challenging, as videos tend to be long and untrimmed, and each video may contain multiple classes, causing frame-level class imbalance. Previous methods tackle long-tailed video recognition only through frame-level sampling for class re-balancing, without distinguishing the frame-level feature representations of head and tail classes. To improve the frame-level feature representation of tail classes, we modulate the frame-level features with an auxiliary distillation loss that reduces the distribution distance between head and tail classes. Moreover, we design a mixture-of-experts framework with two different expert designs: the first expert, an attention-based classification network, handles the original long-tailed distribution, while the second expert deals with the re-balanced distribution produced by class-balanced sampling. Notably, in the second expert, we specifically focus on the frames left unresolved by the first expert by designing a complementary frame selection module, which inherits the attention weights from the first expert and selects frames with low attention weights; we also enhance the motion feature representation for these selected frames. To highlight the multi-label challenge in long-tailed video recognition, we create two additional benchmarks based on Charades and CharadesEgo videos with the multi-label property, called CharadesLT and CharadesEgoLT. Extensive experiments on the existing long-tailed video benchmark VideoLT and the two new benchmarks verify the effectiveness of our proposed method, which achieves state-of-the-art performance. The code and proposed benchmarks are released at https://github.com/VisionLanguageLab/MEID.
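The complementary frame selection idea in the abstract can be illustrated with a minimal sketch. This is not from the paper's code release; the function name and the simple top-k rule are assumptions for illustration only. It shows the core mechanism: the second expert inherits per-frame attention weights from the first expert and picks the frames with the lowest weights, i.e., the frames the first expert largely ignored.

```python
import numpy as np

def select_complementary_frames(attention_weights, k):
    """Illustrative sketch (hypothetical helper, not the authors' code):
    select the k frames with the LOWEST attention weights so that the
    second expert focuses on frames the first, attention-based expert
    down-weighted."""
    attention_weights = np.asarray(attention_weights)
    # argsort ascending: lowest-attention frame indices come first
    idx = np.argsort(attention_weights)[:k]
    # return indices in temporal order to preserve frame ordering
    return np.sort(idx)

# Example: the first expert attends mostly to frames 1 and 3;
# the complementary module hands the neglected frames to expert 2.
weights = [0.05, 0.40, 0.10, 0.35, 0.12]
print(select_complementary_frames(weights, 2))  # -> [0 2]
```

In practice the selected frames would then be fed through the second expert's enhanced motion-feature branch; this sketch only covers the selection step.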

Published

2023-06-26

How to Cite

Li, X., & Xu, H. (2023). MEID: Mixture-of-Experts with Internal Distillation for Long-Tailed Video Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 37(2), 1451-1459. https://doi.org/10.1609/aaai.v37i2.25230

Section

AAAI Technical Track on Computer Vision II