Reinforced Multi-Label Image Classification by Exploring Curriculum

Authors

  • Shiyi He Peking University
  • Chang Xu UBTECH Sydney AI Centre, SIT, FEIT, University of Sydney
  • Tianyu Guo Peking University
  • Chao Xu Peking University
  • Dacheng Tao UBTECH Sydney AI Centre, SIT, FEIT, University of Sydney

DOI:

https://doi.org/10.1609/aaai.v32i1.11770

Abstract

Humans and animals learn much better when examples are not randomly presented but organized in a meaningful order which illustrates gradually more concepts, and gradually more complex ones. Inspired by this curriculum learning mechanism, we propose a reinforced multi-label image classification approach that imitates human behavior by labeling images from easy to complex. This approach allows a reinforcement learning agent to sequentially predict labels by fully exploiting the image features and previously predicted labels. The agent discovers the optimal policies by maximizing a long-term reward which reflects prediction accuracy. Experimental results on PASCAL VOC2007 and 2012 demonstrate the necessity of reinforced multi-label learning and the algorithm's effectiveness in real-world multi-label image classification tasks.
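The sequential decision process the abstract describes can be sketched as follows. This is a hypothetical toy illustration, not the authors' implementation: label scores, the co-occurrence table, and the stopping threshold are all invented here for demonstration, and the set-F1 reward merely stands in for the paper's accuracy-based long-term reward.

```python
# Toy sketch of sequential multi-label prediction: at each step the agent
# scores the remaining labels using the image features AND the labels it
# has already predicted, picks the best one, and stops when no remaining
# label scores above a threshold. All numbers below are illustrative.

LABELS = ["person", "dog", "car", "chair"]

# Hypothetical per-label linear weights over a 3-dim image feature vector.
WEIGHTS = {
    "person": [1.0, 0.0, 0.0],
    "dog":    [0.0, 1.0, 0.0],
    "car":    [0.0, 0.0, 1.0],
    "chair":  [-1.0, -1.0, -1.0],
}

# Hypothetical symmetric co-occurrence bonus: predicting "person" first
# makes "dog" more likely at the next step.
COOC = {("person", "dog"): 0.3}


def score(features, predicted, label):
    """Score a candidate label given image features and earlier predictions."""
    s = sum(w * f for w, f in zip(WEIGHTS[label], features))
    s += sum(COOC.get((p, label), COOC.get((label, p), 0.0)) for p in predicted)
    return s


def predict_sequence(features, stop_threshold=0.0):
    """Greedily emit labels one by one, easy (high-scoring) ones first."""
    predicted, remaining = [], list(LABELS)
    while remaining:
        best_s, best_l = max((score(features, predicted, l), l) for l in remaining)
        if best_s < stop_threshold:
            break  # the agent decides to stop labeling this image
        predicted.append(best_l)
        remaining.remove(best_l)
    return predicted


def reward(predicted, ground_truth):
    """Terminal reward reflecting prediction accuracy: set-level F1."""
    tp = len(set(predicted) & set(ground_truth))
    if tp == 0 or not predicted or not ground_truth:
        return 0.0
    prec, rec = tp / len(predicted), tp / len(ground_truth)
    return 2 * prec * rec / (prec + rec)
```

In the paper the policy itself is learned by maximizing the expected long-term reward; here the scoring weights are fixed constants purely to make the episode concrete. For example, with `features = [1.0, 0.5, -0.2]` the agent predicts `["person", "dog"]` (the co-occurrence bonus lifts "dog" after "person"), and against ground truth `["person", "dog", "car"]` the reward is the F1 score 0.8.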

Published

2018-04-29

How to Cite

He, S., Xu, C., Guo, T., Xu, C., & Tao, D. (2018). Reinforced Multi-Label Image Classification by Exploring Curriculum. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11770