DTTNet: Improving Video Shadow Detection via Dark-Aware Guidance and Tokenized Temporal Modeling
DOI:
https://doi.org/10.1609/aaai.v40i8.37607
Abstract
Video shadow detection confronts two entwined difficulties: distinguishing shadows from complex backgrounds and modeling dynamic shadow deformations under varying illumination. To address shadow-background ambiguity, we leverage linguistic priors through the proposed Vision-language Match Module (VMM) and a Dark-aware Semantic Block (DSB), extracting text-guided features to explicitly differentiate shadows from dark objects. Furthermore, we introduce adaptive mask reweighting to downweight penumbra regions during training and apply edge masks at the final decoder stage for better supervision. For temporal modeling of variable shadow shapes, we propose a Tokenized Temporal Block (TTB) that decouples spatiotemporal learning. TTB summarizes cross-frame shadow semantics into learnable temporal tokens, enabling efficient sequence encoding with minimal computational overhead. Comprehensive experiments on multiple benchmark datasets demonstrate state-of-the-art accuracy and real-time inference efficiency.
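The abstract's tokenized temporal modeling idea can be illustrated with a minimal sketch: a small set of learnable tokens attends over flattened space-time features to summarize cross-frame shadow semantics, and each frame position then reads the summary back. This is an illustrative NumPy sketch under assumed shapes, not the paper's implementation; the function and parameter names (`tokenized_temporal_summary`, `frames`, `tokens`) are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def tokenized_temporal_summary(frames, tokens):
    """Sketch of tokenized temporal modeling (hypothetical shapes).

    frames: (T*N, C) spatial features flattened across T frames.
    tokens: (K, C) learnable temporal tokens, with K << T*N.
    """
    scale = np.sqrt(frames.shape[1])
    # Tokens gather cross-frame shadow semantics via attention over all positions.
    attn = softmax(tokens @ frames.T / scale)          # (K, T*N)
    summary = attn @ frames                            # (K, C)
    # Each position reads the compact temporal summary back (residual update).
    back = softmax(frames @ summary.T / scale)         # (T*N, K)
    return frames + back @ summary                     # (T*N, C)
```

Because the frames interact only through K tokens, the cost is O(K·T·N) rather than the O((T·N)^2) of full space-time self-attention, which is the efficiency argument behind decoupled spatiotemporal learning.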
Published
2026-03-14
How to Cite
Li, Z., Sun, K., Yao, R., Zhu, H., Hu, F., Zhao, J., … Zhou, Y. (2026). DTTNet: Improving Video Shadow Detection via Dark-Aware Guidance and Tokenized Temporal Modeling. Proceedings of the AAAI Conference on Artificial Intelligence, 40(8), 6753–6761. https://doi.org/10.1609/aaai.v40i8.37607
Issue
Section
AAAI Technical Track on Computer Vision V