Learning from Weakly-Labeled Web Videos via Exploring Sub-concepts
Keywords: Computer Vision (CV)
Abstract
Learning visual knowledge from massive weakly-labeled web videos has attracted growing research interest thanks to the large corpus of easily accessible video data on the Internet. However, for video action recognition, the action of interest may appear only in arbitrary clips of untrimmed web videos, resulting in high label noise in the temporal space. To address this challenge, we introduce a new method for pre-training video action recognition models using queried web videos. Instead of trying to filter out potential noise, we propose to provide fine-grained supervision signals by defining the concept of Sub-Pseudo Label (SPL). Specifically, SPL spans a new set of meaningful "middle ground" labels constructed by combining the original weak labels obtained during video querying with prior knowledge distilled from a teacher model. Consequently, SPL provides enriched supervision for video models to learn better representations and improves the data utilization efficiency of untrimmed videos. We validate the effectiveness of our method on four video action recognition datasets and a weakly-labeled image dataset. Experiments show that SPL outperforms several existing pre-training strategies, and the learned representations lead to competitive results on several benchmarks.
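The sub-concept construction described in the abstract can be sketched as follows. This is a minimal illustration assuming one simple SPL variant: each of the K query classes is split into a "teacher agrees" and a "teacher disagrees" sub-concept, yielding a 2K-way label space. All names and the exact split rule are assumptions for illustration, not the authors' released implementation.

```python
def spl_label(query_label: int, teacher_label: int, num_classes: int) -> int:
    """Map a (weak query label, teacher prediction) pair to a sub-concept label.

    Hypothetical 2K-way scheme: sub-concepts 0..K-1 are "clean" clips where
    the teacher agrees with the query label; K..2K-1 are "noisy" clips of
    the same class where the teacher disagrees.
    """
    if teacher_label == query_label:
        return query_label                # "clean" sub-concept of class k
    return num_classes + query_label      # "noisy" sub-concept of class k

# Example: 3 action classes; three clips all queried under class 1,
# with teacher predictions 1, 0, and 2 respectively.
labels = [spl_label(1, t, num_classes=3) for t in (1, 0, 2)]
print(labels)  # the agreeing clip keeps label 1; the others map to sub-concept 4
```

Training on this enlarged label space gives the model a way to separate clips that truly contain the queried action from background clips of the same query, rather than forcing both onto one label or discarding the disagreeing clips.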
How to Cite
Li, K., Zhang, Z., Wu, G., Xiong, X., Lee, C.-Y., Lu, Z., Fu, Y., & Pfister, T. (2022). Learning from Weakly-Labeled Web Videos via Exploring Sub-concepts. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2), 1341-1349. https://doi.org/10.1609/aaai.v36i2.20022
AAAI Technical Track on Computer Vision II