Auto-Encoding Transformations in Reparameterized Lie Groups for Unsupervised Learning

Authors

  • Feng Lin CAS Key Laboratory of GIPAS, EEIS Department, University of Science and Technology of China
  • Haohang Xu Department of Electronic Engineering, Shanghai Jiao Tong University
  • Houqiang Li CAS Key Laboratory of GIPAS, EEIS Department, University of Science and Technology of China Institute of Artificial Intelligence, Hefei Comprehensive National Science Center
  • Hongkai Xiong Department of Electronic Engineering, Shanghai Jiao Tong University
  • Guo-Jun Qi Laboratory for MAPLE, Futurewei Technologies

DOI:

https://doi.org/10.1609/aaai.v35i10.17044

Keywords:

Representation Learning

Abstract

Unsupervised training of deep representations has recently demonstrated remarkable potential for mitigating the prohibitive cost of annotating labeled data. Among these methods is predicting transformations as a pretext task for self-training representations. However, existing approaches in this category learn representations by either treating a discrete set of transformations as separate classes, or using the Euclidean distance as the metric to minimize the errors between transformations. None of them has been dedicated to revealing the vital role of the geometry of transformation groups in learning representations. Indeed, an image must continuously transform along the curved manifold of a transformation group rather than along a straight line through the ambient Euclidean space, whose off-manifold points do not correspond to valid transformations. This suggests using the geodesic distance to minimize the errors between the estimated and ground-truth transformations. In particular, we focus on homographies, a general group of planar transformations containing the Euclidean, similarity, and affine transformations as special cases. To avoid explicitly computing the intractable Riemannian logarithm, we project homographies onto an alternative group of rotation transformations SR(3) with a tractable form of geodesic distance. Experiments demonstrate that the proposed approach to Auto-Encoding Transformations exhibits superior performance on a variety of recognition problems.
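To make the abstract's central contrast concrete, the sketch below compares the Euclidean (Frobenius) distance with the geodesic distance on the standard rotation group SO(3). This is an illustrative example only, not the paper's SR(3) reparameterization of homographies: for rotations, the Riemannian logarithm reduces to the rotation angle of the relative rotation, which is the "tractable form of geodesic distance" the abstract alludes to. The helper `rotation_z` is a hypothetical convenience for the example.

```python
import numpy as np

def so3_geodesic_distance(R1, R2):
    """Geodesic distance between two rotations on the SO(3) manifold.

    Unlike the straight-line Frobenius distance in the ambient 3x3
    matrix space, this measures arc length along the curved group
    manifold: the rotation angle of the relative rotation R1^T R2.
    """
    R_rel = R1.T @ R2
    # Recover the rotation angle from the trace identity
    # tr(R) = 1 + 2*cos(theta); clip to guard against floating-point
    # values slightly outside [-1, 1].
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(cos_theta)

def frobenius_distance(R1, R2):
    """Straight-line distance in the ambient Euclidean matrix space."""
    return np.linalg.norm(R1 - R2)

def rotation_z(theta):
    """Hypothetical helper: rotation about the z-axis by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])
```

For two z-axis rotations by 0.3 and 1.0 radians, the geodesic distance is exactly the relative angle 0.7, whereas the Frobenius distance of about 0.97 has no such geometric interpretation; minimizing the latter treats rotation matrices as arbitrary points in R^9.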

Published

2021-05-18

How to Cite

Lin, F., Xu, H., Li, H., Xiong, H., & Qi, G.-J. (2021). Auto-Encoding Transformations in Reparameterized Lie Groups for Unsupervised Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 8610-8617. https://doi.org/10.1609/aaai.v35i10.17044

Section

AAAI Technical Track on Machine Learning III