Semantic-guided Reinforced Region Embedding for Generalized Zero-Shot Learning

Authors

  • Jiannan Ge, University of Science and Technology of China
  • Hongtao Xie, University of Science and Technology of China
  • Shaobo Min, University of Science and Technology of China
  • Yongdong Zhang, University of Science and Technology of China

DOI:

https://doi.org/10.1609/aaai.v35i2.16230

Keywords:

Object Detection & Categorization, Multi-modal Vision

Abstract

Generalized Zero-Shot Learning (GZSL) aims to recognize images from either the seen or the unseen domain, mainly by learning a joint embedding space that associates image features with the corresponding category descriptions. Recent methods have shown that localizing important object regions can effectively bridge the semantic-visual gap. However, these methods all rely on one-off visual localizers, which lack interpretability and flexibility. In this paper, we propose a novel Semantic-guided Reinforced Region Embedding (SR2E) network that localizes important objects based on long-term interests to construct the semantic-visual embedding space. SR2E consists of a Reinforced Region Module (R2M) and a Semantic Alignment Module (SAM). First, without annotated bounding boxes as supervision, R2M encodes semantic category guidance into reward and punishment criteria that teach the localizer serialized region search. Moreover, R2M explores different action spaces along the serialized search path to avoid locally optimal localization, thereby generating discriminative visual features with less redundancy. Second, SAM preserves semantic relationships in the visual features via semantic-visual alignment and designs a domain detector to alleviate domain confusion. Experiments on four public benchmarks demonstrate that the proposed SR2E is an effective GZSL method with a reinforced embedding space, obtaining an average improvement of 6.1%.
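To make the two components concrete, below is a minimal PyTorch sketch of the ideas the abstract describes: a policy that sequentially selects regions and is rewarded by semantic compatibility rather than box supervision (the R2M idea), and an alignment head with a seen/unseen domain detector (the SAM idea). This is an illustration under stated assumptions only; the module names, dimensions, REINFORCE-style objective, and cosine-similarity reward here are our own simplifications, not the paper's exact formulation.

```python
# Illustrative sketch of R2M/SAM-style components, NOT the authors' implementation.
# Reward design, losses, and all hyperparameters are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReinforcedRegionModule(nn.Module):
    """Sequentially picks T regions from N candidate region features.

    Instead of bounding-box supervision, the policy is rewarded when the
    pooled feature of the chosen regions, projected into the semantic space,
    matches the ground-truth class embedding (assumed reward design).
    """
    def __init__(self, feat_dim, sem_dim, num_steps=3):
        super().__init__()
        self.num_steps = num_steps
        self.policy = nn.Linear(feat_dim, 1)            # scores each candidate region
        self.visual_to_sem = nn.Linear(feat_dim, sem_dim)

    def forward(self, region_feats, class_embed):
        # region_feats: (B, N, feat_dim); class_embed: (B, sem_dim)
        B, N, _ = region_feats.shape
        mask = torch.zeros(B, N, dtype=torch.bool, device=region_feats.device)
        log_probs, picked = [], []
        for _ in range(self.num_steps):                 # serialized region search
            scores = self.policy(region_feats).squeeze(-1)       # (B, N)
            scores = scores.masked_fill(mask, float('-inf'))     # no region twice
            dist = torch.distributions.Categorical(logits=scores)
            idx = dist.sample()                                  # stochastic action
            log_probs.append(dist.log_prob(idx))
            mask[torch.arange(B), idx] = True
            picked.append(region_feats[torch.arange(B), idx])
        pooled = torch.stack(picked, dim=1).mean(dim=1)          # (B, feat_dim)
        pred_sem = self.visual_to_sem(pooled)                    # (B, sem_dim)
        # Reward/punishment: semantic compatibility with the true class embedding.
        reward = F.cosine_similarity(pred_sem, class_embed, dim=-1).detach()
        policy_loss = -(torch.stack(log_probs, dim=1).sum(dim=1) * reward).mean()
        return pred_sem, policy_loss

class SemanticAlignmentModule(nn.Module):
    """Aligns visual features with class semantics and detects seen vs. unseen domain."""
    def __init__(self, sem_dim):
        super().__init__()
        self.domain_detector = nn.Sequential(
            nn.Linear(sem_dim, sem_dim // 2), nn.ReLU(),
            nn.Linear(sem_dim // 2, 1))                 # single seen/unseen logit

    def forward(self, pred_sem, all_class_embeds, labels, domain_labels):
        # Compatibility against every class embedding -> classification loss.
        logits = pred_sem @ all_class_embeds.t()        # (B, num_classes)
        align_loss = F.cross_entropy(logits, labels)
        domain_logit = self.domain_detector(pred_sem).squeeze(-1)
        domain_loss = F.binary_cross_entropy_with_logits(domain_logit,
                                                         domain_labels.float())
        return align_loss + domain_loss
```

Sampling one region per step from a categorical policy, with already-chosen regions masked out, is what lets the localizer trade off immediate versus long-term reward across the search path; a one-off localizer would score all regions in a single pass and could not revise its choices.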

Published

2021-05-18

How to Cite

Ge, J., Xie, H., Min, S., & Zhang, Y. (2021). Semantic-guided Reinforced Region Embedding for Generalized Zero-Shot Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(2), 1406-1414. https://doi.org/10.1609/aaai.v35i2.16230

Section

AAAI Technical Track on Computer Vision I