Deep Embedded Non-Redundant Clustering


  • Lukas Miklautz University of Vienna
  • Dominik Mautz Ludwig-Maximilians-Universität München
  • Muzaffer Can Altinigneli Ludwig-Maximilians-Universität München
  • Christian Böhm Ludwig-Maximilians-Universität München
  • Claudia Plant University of Vienna



Complex data types like images can be clustered in multiple valid ways. Non-redundant clustering aims at extracting those meaningful groupings by discouraging redundancy between clusterings. Unfortunately, clustering images directly in pixel space has been shown to yield unsatisfactory results. This has increased interest in combining the high representational power of deep learning with clustering, termed deep clustering. Algorithms of this type combine the non-linear embedding of an autoencoder with a clustering objective and optimize both simultaneously. None of these algorithms, however, tries to find multiple non-redundant clusterings. In this paper, we propose the novel Embedded Non-Redundant Clustering algorithm (ENRC). It is the first algorithm that combines neural-network-based representation learning with non-redundant clustering. ENRC can find multiple highly non-redundant clusterings of different dimensionalities within a data set. This is achieved by (softly) assigning each dimension of the embedded space to the different clusterings. For instance, in image data sets it can group the objects by color, material and shape, without the need for explicit feature engineering. We show the viability of ENRC in extensive experiments and empirically demonstrate the advantage of combining non-linear representation learning with non-redundant clustering.
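The soft dimension assignment mentioned above can be illustrated with a small NumPy sketch. This is a simplification, not the authors' full ENRC objective: the fixed logit matrix `B`, the toy data, and all variable names are illustrative assumptions. The idea shown is that a softmax over clusterings, taken per latent dimension, yields weights that let each clustering "see" mainly its own subspace when computing distances to its centers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embedded data: dims 0-1 carry one grouping (e.g. "color"),
# dims 2-3 carry an independent grouping (e.g. "shape").
n, d = 200, 4
labels_a = rng.integers(0, 2, n)
labels_b = rng.integers(0, 3, n)
Z = np.zeros((n, d))
Z[:, :2] = labels_a[:, None] * 3.0 + rng.normal(0, 0.3, (n, 2))
Z[:, 2:] = labels_b[:, None] * 3.0 + rng.normal(0, 0.3, (n, 2))

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Assignment logits (hypothetical, hand-set for illustration): each latent
# dimension is softly attributed to one of J=2 clusterings via a softmax
# over the clusterings.
J = 2
B = np.array([[4.0, -4.0],   # dim 0 -> mostly clustering 0
              [4.0, -4.0],   # dim 1 -> mostly clustering 0
              [-4.0, 4.0],   # dim 2 -> mostly clustering 1
              [-4.0, 4.0]])  # dim 3 -> mostly clustering 1
beta = softmax(B, axis=1)    # shape (d, J); each row sums to 1

def weighted_assign(Z, centers, w):
    """Assign each point to its nearest center under a
    dimension-weighted squared Euclidean distance."""
    diff = Z[:, None, :] - centers[None, :, :]         # (n, k, d)
    dist = (diff ** 2 * w[None, None, :]).sum(-1)      # (n, k)
    return dist.argmin(1)

# One k-means-style loop per clustering; each clustering effectively
# operates on the dimensions its beta column emphasizes.
ks = (2, 3)
centers = [Z[rng.choice(n, k, replace=False)] for k in ks]
for _ in range(10):
    for j, k in enumerate(ks):
        a = weighted_assign(Z, centers[j], beta[:, j])
        centers[j] = np.stack([
            Z[a == c].mean(0) if (a == c).any() else centers[j][c]
            for c in range(k)
        ])

assign_a = weighted_assign(Z, centers[0], beta[:, 0])
assign_b = weighted_assign(Z, centers[1], beta[:, 1])
```

In ENRC the analogue of `B` is learned jointly with the autoencoder rather than fixed, but the sketch shows how soft per-dimension weights allow two clusterings of different dimensionality and cardinality to coexist in one embedded space without interfering with each other.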




How to Cite

Miklautz, L., Mautz, D., Altinigneli, M. C., Böhm, C., & Plant, C. (2020). Deep Embedded Non-Redundant Clustering. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5174-5181.



AAAI Technical Track: Machine Learning