Zero Shot Learning with the Isoperimetric Loss

Authors

  • Shay Deutsch, UCLA
  • Andrea Bertozzi, UCLA
  • Stefano Soatto, UCLA

DOI:

https://doi.org/10.1609/aaai.v34i07.6698

Abstract

We introduce the isoperimetric loss as a regularization criterion for learning the map from a visual representation to a semantic embedding, to be used to transfer knowledge to unknown classes in a zero-shot learning setting. We use a pre-trained deep neural network model as the visual representation of image data, a Word2Vec embedding of the class labels, and linear maps between the visual and semantic embedding spaces. However, the spaces themselves are not linear: we postulate that the embeddings are populated by noisy samples lying near otherwise smooth manifolds. We exploit the graph structure defined by the sample points to regularize the estimates of those manifolds, inferring the graph connectivity with a generalization of the isoperimetric inequalities from Riemannian geometry to graphs. Surprisingly, this regularization alone, paired with the simplest baseline model, outperforms the state of the art among fully automated methods on zero-shot learning benchmarks such as AwA and CUB. The improvement is obtained solely by learning the structure of the underlying spaces through the imposed regularity.
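The abstract only outlines the pipeline, so the following is a minimal sketch of the simplest baseline it refers to: a closed-form ridge-regression linear map from pre-extracted visual features to Word2Vec class embeddings, with unseen classes predicted by nearest-neighbor search in the semantic space. The isoperimetric graph regularization itself is not reproduced here; the dimensions, the ridge weight lam, and the random stand-in data are illustrative assumptions, not the authors' implementation.

    import numpy as np

    # Assumed sizes: d_v = visual feature dimension (e.g. a CNN feature vector),
    # d_s = Word2Vec embedding dimension; random data stands in for real features.
    rng = np.random.default_rng(0)
    d_v, d_s, n_train = 2048, 300, 500

    X = rng.normal(size=(n_train, d_v))          # visual features of seen-class images
    S = rng.normal(size=(n_train, d_s))          # Word2Vec embedding of each image's label
    unseen_classes = rng.normal(size=(10, d_s))  # Word2Vec vectors of the unseen class names

    # Linear map from visual to semantic space, fit by ridge regression (closed form).
    lam = 1.0
    W = np.linalg.solve(X.T @ X + lam * np.eye(d_v), X.T @ S)   # shape (d_v, d_s)

    def predict_unseen(x):
        """Map a visual feature into semantic space; return the nearest unseen class index."""
        s_hat = x @ W
        sims = (unseen_classes @ s_hat) / (
            np.linalg.norm(unseen_classes, axis=1) * np.linalg.norm(s_hat) + 1e-12
        )
        return int(np.argmax(sims))

    print(predict_unseen(rng.normal(size=d_v)))

The map stays linear, matching the baseline described in the abstract; the graph-based isoperimetric regularization would act on the embedded samples to denoise the manifold estimates before (or alongside) such a fit.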

Published

2020-04-03

How to Cite

Deutsch, S., Bertozzi, A., & Soatto, S. (2020). Zero Shot Learning with the Isoperimetric Loss. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 10704-10712. https://doi.org/10.1609/aaai.v34i07.6698

Issue

Vol. 34 No. 07 (2020)

Section

AAAI Technical Track: Vision