VLCounter: Text-Aware Visual Representation for Zero-Shot Object Counting
DOI:
https://doi.org/10.1609/aaai.v38i3.28050
Keywords:
CV: Language and Vision, CV: Applications, CV: Scene Analysis & Understanding
Abstract
Zero-Shot Object Counting (ZSOC) aims to count instances of arbitrary classes referred to in a query image without human-annotated exemplars. To address ZSOC, preceding studies proposed a two-stage pipeline: discovering exemplars and then counting. However, this sequentially designed two-stage process remains vulnerable to error propagation. In this work, we propose a one-stage baseline, Visual-Language Baseline (VLBase), which explores the implicit association between the semantic and patch embeddings of CLIP. Subsequently, we extend VLBase to Visual-Language Counter (VLCounter) by incorporating three modules devised to tailor VLBase for object counting. First, we introduce Semantic-conditioned Prompt Tuning (SPT) within the image encoder to acquire target-highlighted representations. Second, a Learnable Affine Transformation (LAT) is employed to translate the semantic-patch similarity map into a form suitable for the counting task. Lastly, we transfer the layer-wise encoded features to the decoder through a Segment-aware Skip Connection (SaSC) to preserve generalization to unseen classes. Through extensive experiments on FSC147, CARPK, and PUCPR+, we demonstrate the benefits of our end-to-end framework, VLCounter. Code is available at https://github.com/seunggu0305/VLCounter
Downloads
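The abstract's core idea can be illustrated with a minimal sketch: a semantic-patch similarity map computed between a CLIP-style class-name embedding and patch embeddings, followed by a scale-and-shift affine transform in the spirit of LAT. All shapes, variable names, and the fixed affine parameters below are illustrative assumptions, not the paper's actual implementation (where the affine parameters would be learned end-to-end).

```python
import numpy as np

# Illustrative shapes only: a text embedding t (D,) and N patch
# embeddings P (N, D), e.g. 14x14 = 196 patches from a ViT backbone.
rng = np.random.default_rng(0)
D, N = 512, 196
t = rng.normal(size=D)
P = rng.normal(size=(N, D))

# Cosine similarity between the class-name embedding and every patch
# gives the semantic-patch similarity map S.
t_n = t / np.linalg.norm(t)
P_n = P / np.linalg.norm(P, axis=1, keepdims=True)
S = P_n @ t_n            # (N,) similarity map

# A learnable affine transformation (the idea behind LAT): scale and
# shift the similarity map; gamma and beta are placeholder constants
# standing in for parameters optimized with the counting loss.
gamma, beta = 2.0, 0.1
S_lat = gamma * S + beta
print(S_lat.shape)
```

The transformed map `S_lat` would then be consumed by the counting decoder to regress a density map; here it simply demonstrates the map's shape and the affine step.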
Published
2024-03-24
How to Cite
Kang, S., Moon, W., Kim, E., & Heo, J.-P. (2024). VLCounter: Text-Aware Visual Representation for Zero-Shot Object Counting. Proceedings of the AAAI Conference on Artificial Intelligence, 38(3), 2714-2722. https://doi.org/10.1609/aaai.v38i3.28050
Issue
Section
AAAI Technical Track on Computer Vision II