Joint Dictionaries for Zero-Shot Learning

Authors

  • Soheil Kolouri (HRL Laboratories, LLC)
  • Mohammad Rostami (University of Pennsylvania)
  • Yuri Owechko (HRL Laboratories, LLC)
  • Kyungnam Kim (HRL Laboratories, LLC)

DOI:

https://doi.org/10.1609/aaai.v32i1.11649

Keywords:

Zero-Shot Learning, Joint Dictionary Learning

Abstract

A classic approach to zero-shot learning (ZSL) is to map the input domain to a set of semantically meaningful attributes that can later be used to classify unseen classes of data (e.g., visual data). In this paper, we propose to learn a visual feature dictionary that has semantically meaningful atoms. Such a dictionary is learned via joint dictionary learning for the visual domain and the attribute domain, while enforcing the same sparse coding for both dictionaries. Our novel attribute-aware formulation provides an algorithmic solution to the domain-shift/hubness problem in ZSL. Upon learning the joint dictionaries, images from unseen classes can be mapped into the attribute space by finding the attribute-aware joint sparse representation using solely the visual data. We demonstrate that our approach provides performance superior or comparable to the state of the art on benchmark datasets.
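As a rough illustration of the formulation described in the abstract, the sketch below alternates between (i) sparse coding the stacked visual/attribute signal over the stacked dictionaries, which enforces one shared code Z for both domains, and (ii) least-squares dictionary updates with atom renormalization. The solver choice (ISTA), all names (ista, joint_dictionary_learning, lam, n_atoms), and the hyperparameters are illustrative assumptions, not the paper's actual optimization.

```python
# Minimal sketch of attribute-aware joint dictionary learning (NumPy only).
# Shapes, names, and the ISTA solver are assumptions for illustration; the
# paper's exact optimization may differ.
import numpy as np

def ista(D, X, lam=0.1, n_iter=100):
    """Sparse-code columns of X over D: min 0.5||X - D Z||_F^2 + lam ||Z||_1."""
    L = np.linalg.norm(D, 2) ** 2                 # Lipschitz constant of the gradient
    Z = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        G = Z - (D.T @ (D @ Z - X)) / L           # gradient step
        Z = np.sign(G) * np.maximum(np.abs(G) - lam / L, 0.0)  # soft threshold
    return Z

def joint_dictionary_learning(X, A, n_atoms=64, lam=0.1, n_epochs=20, seed=0):
    """Learn D_x (visual) and D_a (attribute) that share one sparse code Z.

    X: (d_x, n) visual features; A: (d_a, n) per-sample class attribute vectors.
    """
    rng = np.random.default_rng(seed)
    D_x = rng.standard_normal((X.shape[0], n_atoms))
    D_a = rng.standard_normal((A.shape[0], n_atoms))
    for _ in range(n_epochs):
        # Shared code: sparse-code the stacked signal over the stacked dictionary,
        # which ties the visual and attribute reconstructions to the same Z.
        D = np.vstack([D_x, D_a])
        Z = ista(D, np.vstack([X, A]), lam)
        # Dictionary updates: least squares, then renormalize atoms to unit norm.
        Zp = np.linalg.pinv(Z)
        D_x, D_a = X @ Zp, A @ Zp
        D_x /= np.linalg.norm(D_x, axis=0, keepdims=True) + 1e-12
        D_a /= np.linalg.norm(D_a, axis=0, keepdims=True) + 1e-12
    return D_x, D_a

def predict_attributes(D_x, D_a, X_test, lam=0.1):
    """Map unseen-class images into the attribute space using only visual data."""
    Z = ista(D_x, X_test, lam)                    # code from the visual dictionary alone
    return D_a @ Z                                # decode with the attribute dictionary
```

Given predicted attributes D_a @ Z for a test image, classification then amounts to nearest-neighbor matching against the attribute vectors of the unseen classes (e.g., by cosine similarity).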

Published

2018-04-29

How to Cite

Kolouri, S., Rostami, M., Owechko, Y., & Kim, K. (2018). Joint Dictionaries for Zero-Shot Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11649