MVFNet: Multi-View Fusion Network for Efficient Video Recognition
Keywords: Video Understanding & Activity Analysis, Applications
Abstract
Conventionally, spatiotemporal modeling and its computational complexity are the two most intensively studied topics in video action recognition. Existing state-of-the-art methods achieve excellent accuracy regardless of complexity, while efficient spatiotemporal modeling solutions remain slightly inferior in performance. In this paper, we attempt to achieve both efficiency and effectiveness simultaneously. First, besides traditionally treating H x W x T video frames as a space-time signal (viewing from the Height-Width spatial plane), we propose to also model video from the other two planes, Height-Time and Width-Time, to capture video dynamics thoroughly. Second, our model is built on 2D CNN backbones, and model complexity is kept well in mind by design. Specifically, we introduce a novel multi-view fusion (MVF) module that exploits video dynamics using separable convolution for efficiency. It is a plug-and-play module and can be inserted into off-the-shelf 2D CNNs to form a simple yet effective model called MVFNet. Moreover, MVFNet can be thought of as a generalized video modeling framework that specializes to existing methods such as C2D, SlowOnly, and TSM under different settings. Extensive experiments on popular benchmarks (i.e., Something-Something V1 & V2, Kinetics, UCF-101, and HMDB-51) demonstrate its superiority. The proposed MVFNet achieves state-of-the-art performance with the complexity of a 2D CNN.
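To make the multi-view idea concrete, here is a minimal NumPy sketch of modeling a feature map from the H-W, H-T, and W-T planes with separable 1D convolutions. This is an illustrative assumption, not the paper's exact MVF module: the kernel sizes, the single-channel treatment, and the residual-sum fusion (`mvf_block`, `conv1d_along`) are all hypothetical simplifications for exposition.

```python
import numpy as np

def conv1d_along(x, kernel, axis):
    """Depthwise 1D convolution with 'same' zero padding along one axis."""
    pad = len(kernel) // 2
    x_p = np.pad(x, [(pad, pad) if a == axis else (0, 0) for a in range(x.ndim)])
    out = np.zeros_like(x, dtype=float)
    for i, k in enumerate(kernel):
        sl = [slice(None)] * x.ndim
        sl[axis] = slice(i, i + x.shape[axis])
        out += k * x_p[tuple(sl)]
    return out

def mvf_block(x, kh, kw, kt):
    """Sketch of multi-view modeling on a (T, H, W) single-channel feature map.

    Each 'view' is a separable convolution over one plane of the video volume;
    the three views are fused here by a simple residual sum (an assumption).
    """
    # View 1: Height-Width plane (the conventional spatial view)
    hw = conv1d_along(conv1d_along(x, kh, axis=1), kw, axis=2)
    # View 2: Height-Time plane (vertical motion dynamics)
    ht = conv1d_along(conv1d_along(x, kt, axis=0), kh, axis=1)
    # View 3: Width-Time plane (horizontal motion dynamics)
    wt = conv1d_along(conv1d_along(x, kt, axis=0), kw, axis=2)
    return x + hw + ht + wt
```

Because each view uses only cheap 1D (separable) convolutions rather than a full 3D kernel, the extra cost over a plain 2D CNN stays small, which is the efficiency argument the abstract makes.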
How to Cite
Wu, W., He, D., Lin, T., Li, F., Gan, C., & Ding, E. (2021). MVFNet: Multi-View Fusion Network for Efficient Video Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4), 2943-2951. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16401
AAAI Technical Track on Computer Vision III