SpatioTemporal Difference Network for Video Depth Super-Resolution

Authors

  • Zhengxue Wang, Nanjing University of Science and Technology
  • Yuan Wu, Nanjing University of Science and Technology
  • Xiang Li, Nankai University
  • Zhiqiang Yan, National University of Singapore
  • Jian Yang, Nanjing University of Science and Technology

DOI:

https://doi.org/10.1609/aaai.v40i12.38011

Abstract

Depth super-resolution has achieved impressive performance, and the incorporation of multi-frame information further enhances reconstruction quality. Nevertheless, statistical analyses reveal that video depth super-resolution remains affected by pronounced long-tailed distributions, with the long-tailed effects primarily manifesting in spatial non-smooth regions and temporal variation zones. To address these challenges, we propose a novel SpatioTemporal Difference Network (STDNet) comprising two core branches: a spatial difference branch and a temporal difference branch. In the spatial difference branch, we introduce a spatial difference mechanism to mitigate the long-tailed issues in spatial non-smooth regions. This mechanism dynamically aligns RGB features with learned spatial difference representations, enabling intra-frame RGB-D aggregation for depth calibration. In the temporal difference branch, we further design a temporal difference strategy that preferentially propagates temporal variation information from adjacent RGB and depth frames to the current depth frame, leveraging temporal difference representations to achieve precise motion compensation in temporal long-tailed areas. Extensive experimental results across multiple datasets demonstrate the effectiveness of our STDNet, outperforming existing approaches.
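The two region types the abstract identifies as long-tailed can be illustrated with a toy sketch: spatial non-smooth regions as areas of high depth-gradient magnitude, and temporal variation zones as areas of large frame-to-frame change. This is purely illustrative and not the paper's method; all function names and thresholds below are assumptions.

```python
import numpy as np

def spatial_difference_map(depth):
    """Gradient-magnitude map of one depth frame (H, W).

    Depth boundaries and textured surfaces score high, approximating the
    'spatial non-smooth regions' the abstract refers to.
    """
    gy, gx = np.gradient(depth.astype(np.float64))
    return np.hypot(gx, gy)

def temporal_difference_map(depth_prev, depth_cur):
    """Absolute per-pixel change between adjacent depth frames.

    Large values mark the 'temporal variation zones' where motion occurs.
    """
    return np.abs(depth_cur.astype(np.float64) - depth_prev.astype(np.float64))

def difference_masks(depth_prev, depth_cur, tau_s=0.1, tau_t=0.1):
    """Binary masks for the two long-tailed region types (thresholds assumed)."""
    mask_s = spatial_difference_map(depth_cur) > tau_s
    mask_t = temporal_difference_map(depth_prev, depth_cur) > tau_t
    return mask_s, mask_t
```

For example, a static background pixel falls outside both masks, while a pixel on a moving depth edge falls inside both; a network like STDNet would concentrate its spatial and temporal difference representations on exactly those pixels.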

Published

2026-03-14

How to Cite

Wang, Z., Wu, Y., Li, X., Yan, Z., & Yang, J. (2026). SpatioTemporal Difference Network for Video Depth Super-Resolution. Proceedings of the AAAI Conference on Artificial Intelligence, 40(12), 10403–10411. https://doi.org/10.1609/aaai.v40i12.38011

Section

AAAI Technical Track on Computer Vision IX