Spatio-Temporal Recurrent Networks for Event-Based Optical Flow Estimation


  • Ziluo Ding Peking University
  • Rui Zhao Peking University
  • Jiyuan Zhang Peking University
  • Tianxiao Gao Peking University
  • Ruiqin Xiong Peking University
  • Zhaofei Yu Peking University
  • Tiejun Huang Peking University



Computer Vision (CV)


Event cameras offer a promising alternative for visual perception, especially in high-speed and high-dynamic-range scenes. Recently, deep learning methods have shown great success in providing model-free solutions to many event-based problems, such as optical flow estimation. However, existing deep learning methods do not adequately account for temporal information in their architecture design and cannot effectively extract spatio-temporal features. Another line of research, based on Spiking Neural Networks, suffers from training difficulties in deeper architectures. To address these issues, we propose a novel input representation that captures the temporal distribution of events for signal enhancement. Moreover, we introduce a spatio-temporal recurrent encoding-decoding neural network architecture for event-based optical flow estimation, which uses Convolutional Gated Recurrent Units to extract feature maps from a series of event images. Our architecture also allows traditional frame-based core modules, such as the correlation layer and the iterative residual refinement scheme, to be incorporated. The network is trained end-to-end with self-supervised learning on the Multi-Vehicle Stereo Event Camera dataset. We show that it outperforms all existing state-of-the-art methods by a large margin.
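The abstract describes converting the asynchronous event stream into a series of event images whose temporal distribution is preserved. As a rough illustration of that general idea (not the paper's exact representation), the sketch below bins events by normalized timestamp into a stack of signed count images; the function name, the `(x, y, t, polarity)` event layout, and the binning scheme are all assumptions for this example.

```python
import numpy as np

def events_to_time_bins(events, num_bins, height, width):
    """Accumulate events into a stack of per-time-bin signed count images.

    `events` is an (N, 4) array of (x, y, t, polarity) rows. This is a
    generic illustrative binning, not the representation from the paper.
    """
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    if len(events) == 0:
        return voxel
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    p = np.where(events[:, 3] > 0, 1.0, -1.0)  # map polarity to +/-1
    # Normalize timestamps into [0, num_bins) and assign each event a bin.
    t0, t1 = t.min(), t.max()
    scale = (num_bins - 1e-6) / max(t1 - t0, 1e-9)
    b = ((t - t0) * scale).astype(int)
    # Scatter-add so repeated (bin, y, x) indices accumulate correctly.
    np.add.at(voxel, (b, y, x), p)
    return voxel
```

Each slice of the resulting stack can then be fed to the recurrent encoder as one "event image" in the temporal sequence.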




How to Cite

Ding, Z., Zhao, R., Zhang, J., Gao, T., Xiong, R., Yu, Z., & Huang, T. (2022). Spatio-Temporal Recurrent Networks for Event-Based Optical Flow Estimation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(1), 525-533.



AAAI Technical Track on Computer Vision I