Spatial Transform Decoupling for Oriented Object Detection
DOI:
https://doi.org/10.1609/aaai.v38i7.28502
Keywords:
CV: Object Detection & Categorization, CV: Representation Learning for Vision
Abstract
Vision Transformers (ViTs) have achieved remarkable success in computer vision tasks. However, their potential in rotation-sensitive scenarios has not been fully explored, and this limitation may be inherently attributed to the lack of spatial invariance in the data-forwarding process. In this study, we present a novel approach, termed Spatial Transform Decoupling (STD), providing a simple-yet-effective solution for oriented object detection with ViTs. Built upon stacked ViT blocks, STD utilizes separate network branches to predict the position, size, and angle of bounding boxes, effectively harnessing the spatial transform potential of ViTs in a divide-and-conquer fashion. Moreover, by aggregating cascaded activation masks (CAMs) computed upon the regressed parameters, STD gradually enhances features within regions of interest (RoIs), which complements the self-attention mechanism. Without bells and whistles, STD achieves state-of-the-art performance on the benchmark datasets including DOTA-v1.0 (82.24% mAP) and HRSC2016 (98.55% mAP), which demonstrates the effectiveness of the proposed method. Source code is available at https://github.com/yuhongtian17/Spatial-Transform-Decoupling.
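To make the divide-and-conquer idea in the abstract concrete, the following minimal PyTorch sketch shows separate branches regressing the position, size, and angle of an oriented box from pooled RoI features. This is an illustration under stated assumptions, not the authors' implementation (which is in the linked repository); the class name, branch structure, and dimensions are hypothetical.

import torch
import torch.nn as nn

class DecoupledBBoxHead(nn.Module):
    """Sketch of STD-style decoupled regression: independent branches
    predict position (x, y), size (w, h), and angle of an oriented box."""
    def __init__(self, in_dim=256, hidden_dim=256):
        super().__init__()
        def branch(out_dim):
            # A small MLP per parameter group, so each spatial-transform
            # component is predicted by its own network branch.
            return nn.Sequential(
                nn.Linear(in_dim, hidden_dim),
                nn.ReLU(inplace=True),
                nn.Linear(hidden_dim, out_dim),
            )
        self.pos_branch = branch(2)    # (x, y) center offsets
        self.size_branch = branch(2)   # (w, h) box size
        self.angle_branch = branch(1)  # rotation angle

    def forward(self, feats):
        # feats: (N, in_dim) pooled RoI features from stacked ViT blocks
        xy = self.pos_branch(feats)
        wh = self.size_branch(feats)
        theta = self.angle_branch(feats)
        # (N, 5) oriented box parameters (x, y, w, h, theta)
        return torch.cat([xy, wh, theta], dim=-1)

For example, head = DecoupledBBoxHead(); boxes = head(torch.randn(8, 256)) yields an (8, 5) tensor of oriented-box parameters. In the paper, cascaded activation masks are additionally computed from these intermediate parameters and aggregated to enhance RoI features; that step is omitted from this sketch.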
Published
2024-03-24
How to Cite
Yu, H., Tian, Y., Ye, Q., & Liu, Y. (2024). Spatial Transform Decoupling for Oriented Object Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 38(7), 6782-6790. https://doi.org/10.1609/aaai.v38i7.28502
Issue
Vol. 38 No. 7 (2024)
Section
AAAI Technical Track on Computer Vision VI