Deep Semantic Dictionary Learning for Multi-label Image Classification
Keywords: Object Detection & Categorization
Abstract
Compared with single-label image classification, multi-label image classification is more practical and challenging. Some recent studies have attempted to leverage the semantic information of categories to improve multi-label image classification performance. However, these semantic-based methods treat semantic information only as a complement to the visual representation, without further exploitation. In this paper, we present an innovative approach that casts multi-label image classification as a dictionary learning task. A novel end-to-end model named Deep Semantic Dictionary Learning (DSDL) is designed. In DSDL, an auto-encoder is applied to generate a semantic dictionary from class-level semantics, and this dictionary is then used to represent the visual features extracted by a Convolutional Neural Network (CNN) with label embeddings. DSDL provides a simple but elegant way to exploit and reconcile the label, semantic, and visual spaces simultaneously by conducting dictionary learning among them. Moreover, inspired by the iterative optimization of traditional dictionary learning, we further devise a novel training strategy named Alternately Parameters Update Strategy (APUS) for optimizing DSDL, which alternately optimizes the representation coefficients and the semantic dictionary during forward and backward propagation. Extensive experimental results on three popular benchmarks demonstrate that our method achieves promising performance in comparison with state-of-the-art methods. Our code and models have been released.
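To make the alternating scheme described above concrete, here is a minimal toy sketch of semantic dictionary learning. It is not the authors' implementation: the auto-encoder is replaced by a single linear map `W`, the class semantics `S` and visual feature `v` are random stand-ins, and the alternation mirrors APUS only in spirit (forward pass solves for representation coefficients `x` with the dictionary fixed; backward pass takes a gradient step on the dictionary generator with `x` fixed).

```python
import numpy as np

rng = np.random.default_rng(0)

C, d_sem, d_vis = 5, 16, 32  # toy sizes: classes, semantic dim, visual dim

# Stand-in class-level semantics (e.g. word embeddings in the real model).
S = rng.standard_normal((C, d_sem))

# Linear stand-in for the auto-encoder mapping semantics to dictionary
# atoms in visual-feature space: D = S @ W, so D has one atom per class.
W = rng.standard_normal((d_sem, d_vis)) * 0.1

v = rng.standard_normal(d_vis)  # stand-in CNN visual feature

for step in range(50):
    D = S @ W                                  # semantic dictionary, shape (C, d_vis)
    # Forward half-step: fix D, solve min_x ||D^T x - v||^2.
    # The coefficients x double as per-class label scores.
    x, *_ = np.linalg.lstsq(D.T, v, rcond=None)
    # Backward half-step: fix x, descend on the dictionary generator W.
    resid = D.T @ x - v                        # reconstruction residual, (d_vis,)
    a = S.T @ x                                # (d_sem,)
    grad_W = np.outer(a, resid)                # gradient of 0.5*||resid||^2 w.r.t. W
    W -= grad_W / (a @ a + 1.0)                # step size bounded for stability

recon_err = np.linalg.norm(D.T @ x - v)
```

Both half-steps monotonically reduce the reconstruction error, so the final residual is strictly smaller than the trivial `x = 0` baseline; in the full DSDL model the backward half-step would instead update the auto-encoder's weights by backpropagation.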
How to Cite
Zhou, F., Huang, S., & Xing, Y. (2021). Deep Semantic Dictionary Learning for Multi-label Image Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4), 3572-3580. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16472
AAAI Technical Track on Computer Vision III