HAL: Improved Text-Image Matching by Mitigating Visual Semantic Hubs

Authors

  • Fangyu Liu, University of Cambridge
  • Rongtian Ye, Aalto University
  • Xun Wang, Malong Technologies
  • Shuaipeng Li, SenseTime Research

DOI:

https://doi.org/10.1609/aaai.v34i07.6823

Abstract

The hubness problem widely exists in high-dimensional embedding spaces and is a fundamental source of error for cross-modal matching tasks. In this work, we study the emergence of hubs in Visual Semantic Embeddings (VSE) with application to text-image matching. We analyze the pros and cons of two widely adopted optimization objectives for training VSE and propose a novel hubness-aware loss function (HAL) that addresses previous methods' defects. Unlike (Faghri et al. 2018), which simply takes the hardest sample within a mini-batch, HAL takes all samples into account, using both local and global statistics to scale up the weights of “hubs”. We evaluate our method with various configurations of model architectures and datasets. The method exhibits exceptionally good robustness and brings consistent improvement on the task of text-image matching across all settings. Specifically, under the same model architectures as (Faghri et al. 2018) and (Lee et al. 2018), by switching only the learning objective, we report a maximum R@1 improvement of 7.4% on MS-COCO and 8.3% on Flickr30k.
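To make the contrast in the abstract concrete, the sketch below compares a hardest-in-batch triplet objective (in the spirit of Faghri et al. 2018) with an illustrative all-sample, log-sum-exp-weighted objective in which every negative contributes and items similar to many wrong partners dominate the gradient. This is a minimal PyTorch sketch under assumptions: the function names, the hyperparameters `alpha` and `eps`, and the restriction to in-batch (local) statistics are illustrative choices, not the paper's exact HAL formulation.

```python
import torch
import torch.nn.functional as F

def hardest_negative_loss(sim, margin=0.2):
    """Hinge triplet loss using only the hardest in-batch negative.

    sim: (N, N) similarity matrix between N images and N captions,
         where sim[i, i] is the matched (positive) pair.
    """
    n = sim.size(0)
    pos = sim.diag().view(n, 1)
    mask = torch.eye(n, dtype=torch.bool, device=sim.device)
    # hinge cost of every negative caption per image, and vice versa
    cost_c = (margin + sim - pos).clamp(min=0).masked_fill(mask, 0)
    cost_i = (margin + sim - pos.t()).clamp(min=0).masked_fill(mask, 0)
    # keep only the single hardest negative in each direction
    return cost_c.max(dim=1)[0].mean() + cost_i.max(dim=0)[0].mean()

def weighted_all_negative_loss(sim, alpha=40.0, eps=0.2):
    """Illustrative hubness-aware alternative: all in-batch negatives
    contribute, with exponential weights so that "hub" items that are
    close to many wrong partners receive larger penalties.
    """
    n = sim.size(0)
    mask = torch.eye(n, dtype=torch.bool, device=sim.device)
    neg = sim.masked_fill(mask, float('-inf'))  # drop the positive pair
    # soft maximum over negative captions (rows) and negative images (columns)
    lse_c = torch.logsumexp(alpha * (neg - eps), dim=1)
    lse_i = torch.logsumexp(alpha * (neg - eps), dim=0)
    # log(1 + sum(exp(...))) / alpha, averaged over the batch in both directions
    return (F.softplus(lse_c).mean() + F.softplus(lse_i).mean()) / alpha
```

For example, with a random batch `sim = torch.randn(32, 32)`, both losses are scalar tensors that can be backpropagated directly; the weighted variant spreads gradient over all negatives rather than concentrating it on a single hardest one.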

Published

2020-04-03

How to Cite

Liu, F., Ye, R., Wang, X., & Li, S. (2020). HAL: Improved Text-Image Matching by Mitigating Visual Semantic Hubs. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 11563-11571. https://doi.org/10.1609/aaai.v34i07.6823

Section

AAAI Technical Track: Vision