Building Deep Networks on Grassmann Manifolds

Authors

  • Zhiwu Huang ETH Zurich
  • Jiqing Wu ETH Zurich
  • Luc Van Gool ETH Zurich

DOI:

https://doi.org/10.1609/aaai.v32i1.11725

Keywords:

Grassmann manifolds, Grassmann networks

Abstract

Learning representations on Grassmann manifolds has proven popular in a variety of visual recognition tasks. To enable deep learning on Grassmann manifolds, this paper proposes a deep network architecture that generalizes the Euclidean network paradigm to Grassmann manifolds. In particular, we design full rank mapping layers to transform input Grassmannian data into more desirable representations, exploit re-orthonormalization layers to normalize the resulting matrices, study projection pooling layers to reduce model complexity in the Grassmannian context, and devise projection mapping layers that respect Grassmannian geometry while achieving Euclidean forms for regular output layers. To train the Grassmann networks, we exploit a stochastic gradient descent setting on the manifolds of the connection weights, and study a matrix generalization of backpropagation to update the structured data. Evaluations on three visual recognition tasks show that our Grassmann networks have clear advantages over existing Grassmann learning methods, and achieve results comparable with state-of-the-art approaches.
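The layer types named in the abstract can be illustrated with a minimal NumPy sketch of a single forward pass: a full rank mapping (FRMap) as left-multiplication by a full-rank weight matrix, re-orthonormalization (ReOrth) via a thin QR decomposition, and projection mapping (ProjMap) as the map X ↦ XXᵀ onto a projection matrix. Function names, shapes, and the random toy input below are illustrative assumptions, not the authors' implementation, and projection pooling and the manifold-constrained weight updates are omitted.

```python
import numpy as np

def frmap(X, W):
    # FRMap (sketch): apply a full-rank transform W (d_out x d) to the
    # orthonormal basis X (d x q); the result generally loses
    # orthonormal columns, hence the ReOrth layer that follows.
    return W @ X

def reorth(X):
    # ReOrth (sketch): restore orthonormal columns with a thin QR
    # decomposition, keeping the column span of X.
    Q, _ = np.linalg.qr(X)
    return Q

def projmap(X):
    # ProjMap (sketch): embed the subspace spanned by X as its
    # projection matrix X X^T, which is invariant to the choice of
    # basis and lives in a Euclidean space of symmetric matrices,
    # suitable for regular output layers.
    return X @ X.T

# Toy forward pass: a point on the Grassmannian Gr(3, 10), i.e. a
# 3-dimensional subspace of R^10 represented by an orthonormal basis.
rng = np.random.default_rng(0)
X0, _ = np.linalg.qr(rng.standard_normal((10, 3)))
W = rng.standard_normal((6, 10))   # full rank with probability 1
Y = reorth(frmap(X0, W))           # FRMap -> ReOrth
P = projmap(Y)                     # 6 x 6 symmetric projection matrix
```

After ReOrth, `Y` again has orthonormal columns, and `P` is a symmetric idempotent matrix of rank 3, confirming it is a valid projection-matrix representation of a Grassmann point.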

Published

2018-04-29

How to Cite

Huang, Z., Wu, J., & Van Gool, L. (2018). Building Deep Networks on Grassmann Manifolds. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11725