TY  - JOUR
AU  - Li, Changsheng
AU  - Mao, Kaihang
AU  - Liang, Lingyan
AU  - Ren, Dongchun
AU  - Zhang, Wei
AU  - Yuan, Ye
AU  - Wang, Guoren
PY  - 2021/05/18
Y2  - 2024/03/29
TI  - Unsupervised Active Learning via Subspace Learning
JF  - Proceedings of the AAAI Conference on Artificial Intelligence
JA  - AAAI
VL  - 35
IS  - 9
SE  - AAAI Technical Track on Machine Learning II
DO  - 10.1609/aaai.v35i9.17013
UR  - https://ojs.aaai.org/index.php/AAAI/article/view/17013
SP  - 8332
EP  - 8339
AB  - Unsupervised active learning has been an active research topic in the machine learning community, with the purpose of choosing representative samples to be labelled in an unsupervised manner. Previous works usually take the minimization of data reconstruction loss as the criterion to select representative samples that can better approximate the original inputs. However, in many scenarios data are drawn from low-dimensional subspaces embedded in an arbitrary high-dimensional space, so attempting to precisely reconstruct all entries of one observation may introduce severe noise and lead to a suboptimal solution. In view of this, this paper proposes a novel unsupervised Active Learning model via Subspace Learning, called ALSL. In contrast to previous approaches, ALSL aims to discover the low-rank structures of data and then performs sample selection based on the learnt low-rank representations. To this end, we devise two different strategies and propose two corresponding formulations to perform unsupervised active learning with and under low-rank sample representations, respectively. Since the proposed formulations involve several non-smooth regularization terms, we develop a simple but effective optimization procedure to solve them. Extensive experiments are performed on five publicly available datasets, and the results demonstrate that the first formulation achieves performance comparable with the state of the art, while the second significantly outperforms it, achieving up to a 13% improvement over the second-best baseline.
ER  -