Learning Aligned Cross-Modal Representation for Generalized Zero-Shot Classification

Authors

  • Zhiyu Fang, University of Science and Technology Beijing
  • Xiaobin Zhu, University of Science and Technology Beijing
  • Chun Yang, University of Science and Technology Beijing
  • Zheng Han, University of Science and Technology Beijing
  • Jingyan Qin, University of Science and Technology Beijing
  • Xu-Cheng Yin, University of Science and Technology Beijing

DOI:

https://doi.org/10.1609/aaai.v36i6.20614

Keywords:

Machine Learning (ML)

Abstract

Learning a common latent embedding by aligning the latent spaces of cross-modal autoencoders is an effective strategy for Generalized Zero-Shot Classification (GZSC). However, due to the lack of fine-grained instance-wise annotations, it still easily suffers from the domain shift problem caused by the discrepancy between the visual representations of diversified images and the semantic representations of fixed attributes. In this paper, we propose an innovative autoencoder network that learns Aligned Cross-Modal Representations (dubbed ACMR) for GZSC. Specifically, we propose a novel Vision-Semantic Alignment (VSA) method to strengthen the alignment of cross-modal latent features on latent subspaces guided by a learned classifier. In addition, we propose a novel Information Enhancement Module (IEM) to reduce the possibility of latent variable collapse while encouraging the discriminative ability of the latent variables. Extensive experiments on publicly available datasets demonstrate the state-of-the-art performance of our method.
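To make the alignment idea in the abstract concrete, the sketch below shows the general pattern of aligning the latent spaces of two modality-specific variational autoencoders (visual features and class attributes) with a shared latent classifier providing classifier-guided alignment, in the spirit of the VSA described above. This is a minimal PyTorch illustration under assumed details: the names (ModalityVAE, alignment_step, latent_classifier), network sizes, and equal loss weights are hypothetical choices for exposition, not the paper's ACMR implementation, and the IEM is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityVAE(nn.Module):
    """One branch of a cross-modal autoencoder (visual or attribute modality)."""
    def __init__(self, in_dim, latent_dim, hidden_dim=512):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, in_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample a latent code z ~ N(mu, sigma^2).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar, self.decoder(z)

def alignment_step(vis_vae, att_vae, latent_classifier, x_vis, x_att, labels):
    """One training step: reconstruction + KL + classifier-guided latent alignment."""
    z_v, mu_v, logvar_v, rec_v = vis_vae(x_vis)
    z_a, mu_a, logvar_a, rec_a = att_vae(x_att)

    # Standard VAE reconstruction and KL terms for both modalities.
    rec_loss = F.mse_loss(rec_v, x_vis) + F.mse_loss(rec_a, x_att)
    kl_loss = (-0.5 * torch.mean(1 + logvar_v - mu_v.pow(2) - logvar_v.exp())
               - 0.5 * torch.mean(1 + logvar_a - mu_a.pow(2) - logvar_a.exp()))

    # Classifier-guided alignment: latent codes from both modalities should be
    # mapped to the same seen-class label by a shared latent classifier.
    cls_loss = (F.cross_entropy(latent_classifier(z_v), labels)
                + F.cross_entropy(latent_classifier(z_a), labels))

    # Cross-modal alignment: pull the two modalities' latent means together.
    align_loss = F.mse_loss(mu_v, mu_a)

    # Equal weighting is an illustrative assumption; in practice each term
    # would carry its own coefficient.
    return rec_loss + kl_loss + cls_loss + align_loss
```

After training, unseen-class samples can be classified by encoding them into the shared latent space and comparing against the latent codes of class attributes, which is the usual inference route for latent-embedding GZSC methods.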

Published

2022-06-28

How to Cite

Fang, Z., Zhu, X., Yang, C., Han, Z., Qin, J., & Yin, X.-C. (2022). Learning Aligned Cross-Modal Representation for Generalized Zero-Shot Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6605-6613. https://doi.org/10.1609/aaai.v36i6.20614

Issue

Vol. 36 No. 6 (2022)

Section

AAAI Technical Track on Machine Learning I