Cross-Modal Similarity Learning via Pairs, Preferences, and Active Supervision

Authors

  • Yi Zhen, Georgia Institute of Technology
  • Piyush Rai, Duke University
  • Hongyuan Zha, Georgia Institute of Technology
  • Lawrence Carin, Duke University

DOI:

https://doi.org/10.1609/aaai.v29i1.9599

Keywords:

similarity learning

Abstract

We present a probabilistic framework for learning pairwise similarities between objects belonging to different modalities, such as drugs and proteins, or text and images. Our framework is based on learning a binary-code representation for the objects in each modality and has the following key properties: (i) it can leverage both pairwise constraints and easy-to-obtain relative-preference-based cross-modal constraints; (ii) the probabilistic formulation naturally allows querying for the most useful/informative constraints, enabling an active learning setting (existing methods for cross-modal similarity learning lack such a mechanism); and (iii) the binary code length is learned from the data. We demonstrate the effectiveness of the proposed approach on two problems that require computing pairwise similarities between cross-modal object pairs: cross-modal link prediction in bipartite graphs, and hashing-based cross-modal similarity search.
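To make the two central ingredients of the abstract concrete, here is a minimal sketch in Python/NumPy: scoring cross-modal similarity through per-modality binary codes, and actively querying the constraint the current model is least certain about. This is an illustration only, not the paper's actual model; the fixed code length, the sigmoid on code inner products, and the entropy-based query rule are all assumptions made for this sketch (the paper learns the code length from data).

```python
# Illustrative sketch (assumed details, not the authors' exact model):
# objects in each modality get binary codes, cross-modal similarity is
# a sigmoid of the code inner product, and active supervision queries
# the unlabeled pair with the highest predictive entropy.
import numpy as np

rng = np.random.default_rng(0)

code_length = 16          # learned from data in the paper; fixed here
n_x, n_y = 50, 40         # objects in modality X (e.g., drugs) and Y (e.g., proteins)

# Binary codes in {-1, +1}; a real model would infer these from object
# features together with the pairwise/preference constraints.
B_x = rng.choice([-1.0, 1.0], size=(n_x, code_length))
B_y = rng.choice([-1.0, 1.0], size=(n_y, code_length))

def similarity_prob(bx, by):
    """P(pair is similar), via a sigmoid on the scaled code inner product."""
    return 1.0 / (1.0 + np.exp(-(bx @ by) / np.sqrt(code_length)))

def most_informative_pair(B_x, B_y, labeled):
    """Active querying: return the unlabeled cross-modal pair whose
    predicted similarity has maximum entropy (i.e., is most uncertain)."""
    best, best_ent = None, -1.0
    for i in range(B_x.shape[0]):
        for j in range(B_y.shape[0]):
            if (i, j) in labeled:
                continue
            p = similarity_prob(B_x[i], B_y[j])
            ent = -(p * np.log(p + 1e-12) + (1 - p) * np.log(1 - p + 1e-12))
            if ent > best_ent:
                best, best_ent = (i, j), ent
    return best

query = most_informative_pair(B_x, B_y, labeled=set())
print("Next cross-modal pair to query for a label:", query)
```

The same probabilistic scores support both downstream tasks named in the abstract: thresholding them gives link predictions in a bipartite cross-modal graph, and the binary codes themselves serve as hash codes for fast cross-modal similarity search.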

Published

2015-02-21

How to Cite

Zhen, Y., Rai, P., Zha, H., & Carin, L. (2015). Cross-Modal Similarity Learning via Pairs, Preferences, and Active Supervision. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1). https://doi.org/10.1609/aaai.v29i1.9599

Section

Main Track: Novel Machine Learning Algorithms