Joint Multi-View Representation Learning and Image Tagging

Authors

  • Zhe Xue University of Chinese Academy of Sciences
  • Guorong Li University of Chinese Academy of Sciences
  • Qingming Huang University of Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v30i1.10147

Keywords:

Image Tagging, Image Representation, Multi-View Learning

Abstract

Automatic image annotation is an important problem in several machine learning applications such as image search. Since there exists a semantic gap between low-level image features and high-level semantics, the descriptive ability of the image representation largely affects annotation results. In fact, image representation learning and image tagging are two closely related tasks: a proper image representation can achieve better image annotation results, and image tags can be treated as guidance to learn a more effective image representation. In this paper, we present an optimal predictive subspace learning method which jointly conducts multi-view representation learning and image tagging. The two tasks promote each other, and the annotation performance is further improved. To make the subspace more compact and discriminative, both visual structure and semantic information are exploited during learning. Moreover, we introduce powerful predictors (SVMs) for image tagging to achieve better annotation performance. Experiments on standard image annotation datasets demonstrate the advantages of our method over existing image annotation methods.
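The paper's joint optimization is not reproduced here, but its two ingredients can be illustrated by a simplified two-stage sketch: learn a shared subspace across views, then train per-tag SVM predictors on it. This is only a minimal illustration under assumptions not taken from the paper (synthetic data, alternating least squares for the shared representation, scikit-learn's `LinearSVC` as the tag predictor); the actual method optimizes the representation and the predictors jointly rather than in stages.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.multiclass import OneVsRestClassifier

rng = np.random.default_rng(0)

# Synthetic two-view data: both views are noisy linear images of a
# shared 5-dimensional latent factor (a stand-in for real features).
n, k = 200, 5
Z = rng.normal(size=(n, k))
X1 = Z @ rng.normal(size=(k, 20)) + 0.1 * rng.normal(size=(n, 20))
X2 = Z @ rng.normal(size=(k, 30)) + 0.1 * rng.normal(size=(n, 30))
tags = (Z[:, :3] > 0).astype(int)  # 3 hypothetical binary tags per image

def shared_subspace(views, k, iters=50):
    """Alternating least squares for a shared representation H with
    per-view loadings Wv, minimizing sum_v ||Xv - H @ Wv||_F^2."""
    n = views[0].shape[0]
    H = rng.normal(size=(n, k))
    for _ in range(iters):
        # Fix H, solve each view's loading matrix by least squares.
        Ws = [np.linalg.lstsq(H, Xv, rcond=None)[0] for Xv in views]
        # Fix loadings, solve for the shared representation H.
        A = sum(Xv @ Wv.T for Xv, Wv in zip(views, Ws))
        B = sum(Wv @ Wv.T for Wv in Ws)
        H = A @ np.linalg.pinv(B)
    return H, Ws

H, Ws = shared_subspace([X1, X2], k)

# Tag predictors: one linear SVM per tag on the learned subspace,
# echoing the paper's use of SVM predictors for annotation.
clf = OneVsRestClassifier(LinearSVC(max_iter=5000)).fit(H, tags)
acc = (clf.predict(H) == tags).mean()
print("training tag accuracy:", acc)
```

Because the two views share a low-rank latent factor, the alternating solver recovers (a linear transform of) that factor, and the tags defined on it remain linearly separable in the learned subspace.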

Published

2016-02-21

How to Cite

Xue, Z., Li, G., & Huang, Q. (2016). Joint Multi-View Representation Learning and Image Tagging. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10147

Section

Technical Papers: Machine Learning Applications