TransZero: Attribute-Guided Transformer for Zero-Shot Learning
DOI:
https://doi.org/10.1609/aaai.v36i1.19909
Keywords:
Computer Vision (CV), Machine Learning (ML)
Abstract
Zero-shot learning (ZSL) aims to recognize novel classes by transferring semantic knowledge from seen classes to unseen ones. Semantic knowledge is learned from attribute descriptions shared across classes, which serve as a strong prior for localizing object attributes and representing the discriminative region features that enable effective visual-semantic interaction. Although a few attention-based models have attempted to learn such region features within a single image, the transferability and discriminative attribute localization of visual features are typically neglected. In this paper, we propose an attribute-guided Transformer network, termed TransZero, which learns attribute localization for discriminative visual-semantic embedding representations in ZSL. Specifically, TransZero uses a feature augmentation encoder to alleviate the cross-dataset bias between ImageNet and ZSL benchmarks and to improve the transferability of visual features by reducing the entangled relative geometric relationships among region features. To learn locality-augmented visual features, TransZero employs a visual-semantic decoder that, guided by attribute semantic information, localizes the image regions most relevant to each attribute in a given image. The locality-augmented visual features and semantic vectors are then used for effective visual-semantic interaction in a visual-semantic embedding network. Extensive experiments show that TransZero achieves a new state of the art on three ZSL benchmarks. The code is available at https://github.com/shiming-chen/TransZero.
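As a rough illustration of the attribute-guided decoding idea described in the abstract (not the authors' implementation; module names, dimensions, and the use of learnable attribute embeddings here are assumptions), the core mechanism can be sketched as a cross-attention layer in which attribute semantic vectors act as queries over image region features:

```python
import torch
import torch.nn as nn


class AttributeGuidedDecoder(nn.Module):
    """Minimal sketch: each attribute query attends over image regions,
    producing one locality-augmented feature per attribute."""

    def __init__(self, num_attributes=312, sem_dim=300, vis_dim=2048, hid_dim=256):
        super().__init__()
        # Attribute semantic vectors (e.g., embeddings of attribute names); assumed learnable here.
        self.attr_embed = nn.Parameter(torch.randn(num_attributes, sem_dim))
        self.q_proj = nn.Linear(sem_dim, hid_dim)  # queries from attribute semantics
        self.k_proj = nn.Linear(vis_dim, hid_dim)  # keys from region features
        self.v_proj = nn.Linear(vis_dim, hid_dim)  # values from region features

    def forward(self, regions):
        # regions: (batch, num_regions, vis_dim) grid features from a CNN backbone
        q = self.q_proj(self.attr_embed)                 # (A, hid_dim)
        k = self.k_proj(regions)                         # (B, R, hid_dim)
        v = self.v_proj(regions)                         # (B, R, hid_dim)
        # Attention over regions for every attribute: (B, A, R)
        attn = torch.softmax(q @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)
        return attn @ v                                  # (B, A, hid_dim)


# Hypothetical usage: 7x7 grid of 2048-d backbone features for a batch of 4 images.
decoder = AttributeGuidedDecoder()
attr_features = decoder(torch.randn(4, 49, 2048))
print(attr_features.shape)  # torch.Size([4, 312, 256])
```

The resulting per-attribute features would then be compared against class semantic vectors in a visual-semantic embedding space; see the released code at the repository above for the actual architecture and losses.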
Published
2022-06-28
How to Cite
Chen, S., Hong, Z., Liu, Y., Xie, G.-S., Sun, B., Li, H., Peng, Q., Lu, K., & You, X. (2022). TransZero: Attribute-Guided Transformer for Zero-Shot Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(1), 330-338. https://doi.org/10.1609/aaai.v36i1.19909
Issue
Vol. 36 No. 1 (2022)
Section
AAAI Technical Track on Computer Vision I