Compact Multi-Label Learning

Authors

  • Xiaobo Shen, Nanyang Technological University
  • Weiwei Liu, The University of New South Wales
  • Ivor Tsang, University of Technology Sydney
  • Quan-Sen Sun, Nanjing University of Science and Technology
  • Yew-Soon Ong, Nanyang Technological University

DOI:

https://doi.org/10.1609/aaai.v32i1.11708

Abstract

Embedding methods have shown promising performance in multi-label prediction, as they can exploit label dependencies. However, most embedding methods do not align the input and output well, which degrades prediction performance. They also suffer from high prediction costs on large-scale datasets. To address these issues, this paper proposes a Co-Hashing (CoH) method that formulates multi-label learning from the perspective of cross-view learning. CoH first regards the input and output as two views, and then learns a common latent Hamming space in which input and output pairs are compressed into compact binary embeddings. CoH enjoys two key benefits: 1) the input and output are well aligned and their correlations are exploited; 2) prediction is very efficient via fast cross-view kNN search in the Hamming space. Moreover, we provide a generalization error bound for our method. Extensive experiments on eight real-world datasets demonstrate the superiority of the proposed CoH over state-of-the-art methods in terms of both prediction accuracy and efficiency.
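The prediction step the abstract describes, cross-view kNN search over compact binary codes in a Hamming space, can be sketched as follows. This is an illustrative sketch only, not the paper's implementation: the byte-packed code layout, the brute-force XOR-plus-popcount distance, and all names here are assumptions for demonstration.

```python
import numpy as np

def hamming_knn(query_codes, db_codes, k):
    """Return, for each query binary code, the indices of the k database
    codes with the smallest Hamming distance.

    Codes are byte-packed uint8 arrays (8 bits per byte), so the Hamming
    distance is the popcount of the bitwise XOR of two codes.
    """
    # Precompute a popcount lookup table for all 256 byte values.
    popcount = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint16)
    # XOR every query against every database code, then count differing bits.
    diff = np.bitwise_xor(query_codes[:, None, :], db_codes[None, :, :])
    dists = popcount[diff].sum(axis=2)          # (n_queries, n_db)
    return np.argsort(dists, axis=1)[:, :k]     # k nearest per query
```

In a cross-view setting, `query_codes` would be the binary embeddings of test inputs and `db_codes` those of the training outputs (labels), so the kNN search retrieves candidate label vectors directly in the shared Hamming space; the brute-force scan here would be replaced by multi-index hashing or similar structures at scale.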

Published

2018-04-29

How to Cite

Shen, X., Liu, W., Tsang, I., Sun, Q.-S., & Ong, Y.-S. (2018). Compact Multi-Label Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11708