TY  - JOUR
AU  - Ge, Jiannan
AU  - Xie, Hongtao
AU  - Min, Shaobo
AU  - Zhang, Yongdong
PY  - 2021/05/18
Y2  - 2024/03/29
TI  - Semantic-guided Reinforced Region Embedding for Generalized Zero-Shot Learning
JF  - Proceedings of the AAAI Conference on Artificial Intelligence
JA  - AAAI
VL  - 35
IS  - 2
SE  - AAAI Technical Track on Computer Vision I
DO  - 10.1609/aaai.v35i2.16230
UR  - https://ojs.aaai.org/index.php/AAAI/article/view/16230
SP  - 1406-1414
AB  - Generalized Zero-Shot Learning (GZSL) aims to recognize images from either the seen or unseen domain, mainly by learning a joint embedding space that associates image features with the corresponding category descriptions. Recent methods have shown that localizing important object regions can effectively bridge the semantic-visual gap. However, these methods all rely on one-off visual localizers, which lack interpretability and flexibility. In this paper, we propose a novel Semantic-guided Reinforced Region Embedding (SR2E) network that localizes important objects with long-term interests to construct a semantic-visual embedding space. SR2E consists of a Reinforced Region Module (R2M) and a Semantic Alignment Module (SAM). First, without annotated bounding boxes as supervision, R2M encodes the semantic category guidance into reward and punishment criteria to teach the localizer serialized region searching. Moreover, R2M explores different action spaces along the serialized searching path to avoid locally optimal localization, thereby generating discriminative visual features with less redundancy. Second, SAM preserves semantic relationships in the visual features via semantic-visual alignment and designs a domain detector to alleviate domain confusion. Experiments on four public benchmarks demonstrate that the proposed SR2E is an effective GZSL method with a reinforced embedding space, obtaining an average improvement of 6.1%.
ER  - 