LRSC: Learning Representations for Subspace Clustering
Abstract

Deep-learning-based subspace clustering methods have attracted increasing attention in recent years. Their common theme is to non-linearly map data into a latent space and then uncover subspace structures based on the self-expressiveness property of the data. However, almost all existing deep subspace clustering methods rely only on target-domain data and resort to shallow neural networks for modeling, leaving considerable room for designing more effective representation learning mechanisms tailored to subspace clustering. In this paper, we propose a novel subspace clustering framework that learns precise sample representations. In contrast to previous approaches, the proposed method leverages external data by constructing many related tasks to guide the training of the encoder, motivated by the idea of meta-learning. Given the limited layer depth of current deep subspace clustering models, we distill knowledge from a deeper network trained on the external data and transfer it into the shallower model. To achieve these two goals jointly, we propose a new loss function that realizes them in a single framework. Moreover, we construct a new pretext task for self-supervised training of the model, further improving its representation ability. Extensive experiments on four publicly available datasets clearly demonstrate the efficacy of our method compared to state-of-the-art approaches.
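For context, the self-expressiveness property the abstract refers to states that each latent sample can be reconstructed as a linear combination of the other samples in the same subspace. A minimal sketch of the corresponding objective is shown below (an illustration only, not the paper's implementation; the function name, Frobenius regularizer, and toy data are assumptions):

```python
import numpy as np

def self_expressive_loss(Z, C, lam=0.1):
    """Self-expressiveness objective used in deep subspace clustering:
    latent samples Z (d x n) should satisfy Z ~ Z @ C, where C (n x n)
    is a coefficient matrix with a zero diagonal."""
    C = C - np.diag(np.diag(C))                    # forbid trivial self-reconstruction
    recon = np.linalg.norm(Z - Z @ C, "fro") ** 2  # reconstruction error
    reg = lam * np.linalg.norm(C, "fro") ** 2      # regularizer on the coefficients
    return recon + reg

# Toy example: 20 latent samples with 8 features each (columns = samples).
rng = np.random.default_rng(0)
Z = rng.standard_normal((8, 20))
C = rng.standard_normal((20, 20)) * 0.01
loss = self_expressive_loss(Z, C)
```

In practice, C is learned jointly with the encoder, and spectral clustering is then applied to an affinity matrix built from C.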
How to Cite
Li, C., Yang, C., Liu, B., Yuan, Y., & Wang, G. (2021). LRSC: Learning Representations for Subspace Clustering. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 8340-8348. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17014
AAAI Technical Track on Machine Learning II