VPDETR: End-to-End Vanishing Point DEtection TRansformers
DOI:
https://doi.org/10.1609/aaai.v38i2.27881

Keywords:
CV: 3D Computer Vision, CV: Low Level & Physics-based Vision

Abstract
In the field of vanishing point detection, previous works commonly relied on extracting and clustering straight lines or on classifying candidate points as vanishing points. This paper proposes a novel end-to-end framework, called VPDETR (Vanishing Point DEtection TRansformer), that views vanishing point detection as a set prediction problem, applicable to both Manhattan and non-Manhattan world datasets. By using the positional embeddings of anchor points as queries in the Transformer decoder and dynamically updating them layer by layer, our method can directly take images as input and output their vanishing points, without the need for explicit straight-line extraction or candidate point sampling. Additionally, we introduce an orthogonal loss and a cross-prediction loss to improve accuracy on Manhattan world datasets. Experimental results demonstrate that VPDETR achieves competitive performance compared to state-of-the-art methods, without requiring post-processing.
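The abstract does not spell out the orthogonal loss, but under the Manhattan world assumption the three vanishing points correspond to mutually orthogonal 3D directions, so a natural formulation penalizes pairwise dot products between the predicted unit directions. The sketch below is an illustrative reconstruction of that idea, not the paper's exact implementation; the function name `orthogonal_loss` and the squared-dot-product form are assumptions.

```python
import numpy as np

def orthogonal_loss(directions):
    """Hypothetical orthogonality penalty for Manhattan vanishing directions.

    directions: (3, 3) array, one 3D direction vector per predicted
    vanishing point. Returns the sum of squared pairwise cosines,
    which is zero iff the three directions are mutually orthogonal.
    """
    # Normalize each direction to a unit vector.
    d = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    loss = 0.0
    for i in range(3):
        for j in range(i + 1, 3):
            loss += np.dot(d[i], d[j]) ** 2  # 0 when d[i] ⟂ d[j]
    return loss
```

For a perfectly orthogonal triplet (e.g. the canonical axes) the loss is exactly zero, and it grows as the predicted directions drift away from mutual orthogonality, giving the network a smooth signal to enforce the Manhattan constraint.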
Published
2024-03-24
How to Cite
Chen, T., Ying, X., Yang, J., Wang, R., Guo, R., Xing, B., & Shi, J. (2024). VPDETR: End-to-End Vanishing Point DEtection TRansformers. Proceedings of the AAAI Conference on Artificial Intelligence, 38(2), 1192–1200. https://doi.org/10.1609/aaai.v38i2.27881
Section
AAAI Technical Track on Computer Vision I