Riemannian Local Mechanism for SPD Neural Networks

Authors

  • Ziheng Chen, Jiangnan University
  • Tianyang Xu, Jiangnan University
  • Xiao-Jun Wu, Jiangnan University
  • Rui Wang, Jiangnan University
  • Zhiwu Huang, Singapore Management University
  • Josef Kittler, University of Surrey

DOI:

https://doi.org/10.1609/aaai.v37i6.25867

Keywords:

ML: Deep Neural Network Algorithms, CV: Representation Learning for Vision, ML: Deep Learning Theory, ML: Matrix & Tensor Methods

Abstract

Symmetric Positive Definite (SPD) matrices have received wide attention for data representation in many scientific areas. Although many attempts have been made to develop effective deep architectures for processing data on the Riemannian manifold of SPD matrices, very few solutions explicitly mine the local geometrical information in deep SPD feature representations. Given the great success of local mechanisms in Euclidean methods, we argue that it is of utmost importance to preserve local geometric information in SPD networks. We first analyse, from the higher level of abstraction afforded by category theory, the convolution operator commonly used to capture local information in Euclidean deep networks. Based on this analysis, we define local information on the SPD manifold and design a multi-scale submanifold block for mining local geometry. Experiments involving multiple visual tasks validate the effectiveness of our approach.
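As an illustration of the idea of "local" information on the SPD manifold, the sketch below (a hypothetical example, not the authors' implementation; all function and parameter names are assumptions) exploits the fact that every principal submatrix of an SPD matrix is itself SPD, so sliding diagonal windows of several sizes yields a multi-scale family of local submanifold patches.

```python
# Minimal sketch: extract multi-scale local SPD patches from an SPD matrix.
# Every principal submatrix of an SPD matrix is itself SPD, so each patch
# stays on (a lower-dimensional) SPD manifold.
import numpy as np

def random_spd(n, seed=0):
    """Generate a random SPD matrix for demonstration."""
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))
    return a @ a.T + n * np.eye(n)  # strictly positive definite

def local_spd_patches(x, window, stride=1):
    """Slide a window along the diagonal of the SPD matrix x and return
    the principal submatrices x[i:i+window, i:i+window]."""
    n = x.shape[0]
    return [x[i:i + window, i:i + window]
            for i in range(0, n - window + 1, stride)]

def multi_scale_patches(x, windows=(3, 5, 7)):
    """Collect local patches at several window sizes (scales)."""
    return {w: local_spd_patches(x, w) for w in windows}

if __name__ == "__main__":
    x = random_spd(10)
    for w, ps in multi_scale_patches(x).items():
        # Each patch remains SPD: all eigenvalues are positive.
        assert all(np.all(np.linalg.eigvalsh(p) > 0) for p in ps)
        print(f"window {w}: {len(ps)} SPD patches of shape {ps[0].shape}")
```

In an actual SPD network, such patches would then be processed by Riemannian layers (e.g., BiMap/ReEig-style operations) before aggregation; the windowing shown here is only meant to convey the multi-scale local extraction step.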

Published

2023-06-26

How to Cite

Chen, Z., Xu, T., Wu, X.-J., Wang, R., Huang, Z., & Kittler, J. (2023). Riemannian Local Mechanism for SPD Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 37(6), 7104-7112. https://doi.org/10.1609/aaai.v37i6.25867

Issue

Vol. 37 No. 6 (2023)

Section

AAAI Technical Track on Machine Learning I