Random Tensor Theory for Tensor Decomposition

Authors

  • Mohamed Ouerfelli, Université Paris-Saclay, CEA Paris-Saclay
  • Mohamed Tamaazousti, CEA Saclay
  • Vincent Rivasseau, Université Paris-Saclay

DOI

https://doi.org/10.1609/aaai.v36i7.20761

Keywords

Machine Learning (ML)

Abstract

We propose a new framework for tensor decomposition based on trace invariants, which are particular cases of tensor networks. In general, tensor networks are diagrams/graphs that specify a way to "multiply" a collection of tensors together to produce another tensor, matrix or scalar. The particularity of trace invariants is that multiplying copies of a given input tensor so as to produce a scalar obeys specific symmetry constraints: the resulting scalar is invariant under certain transformations of the involved tensor. We focus our study on the O(N)-invariant graphs, i.e., those invariant under orthogonal transformations of the input tensor. The proposed approach is novel and versatile, since it allows us to address different theoretical and practical aspects of both the CANDECOMP/PARAFAC (CP) and Tucker decomposition models. In particular, we obtain several results: (i) we generalize the computational limit of Tensor PCA (a rank-one tensor decomposition) to the case of a tensor with axes of different dimensions; (ii) we introduce new algorithms for both decomposition models; (iii) we obtain theoretical guarantees for these algorithms; and (iv) we show improvements over the state of the art on synthetic and real data, which also highlights a promising potential for practical applications.
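To make the invariance property described in the abstract concrete, here is a minimal NumPy sketch (not taken from the paper; the invariants chosen and all function names are our own illustrative assumptions). It builds two simple trace invariants of an order-3 tensor and checks numerically that both are unchanged when each axis is rotated by an independent orthogonal matrix, which is the O(N)-invariance the abstract refers to.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10
T = rng.standard_normal((N, N, N))

def degree2_invariant(T):
    # Simplest trace invariant: T_{ijk} T_{ijk}, i.e. the squared
    # Frobenius norm, obtained by fully contracting T with itself.
    return np.einsum("ijk,ijk->", T, T)

def degree4_invariant(T):
    # A degree-4 invariant (illustrative example, not necessarily one
    # used in the paper): first contract two copies of T over axes 2
    # and 3 to get M_{il} = T_{ijk} T_{ljk}, then contract M with
    # itself: invariant = M_{il} M_{li}.
    M = np.einsum("ijk,ljk->il", T, T)
    return np.einsum("il,li->", M, M)

def random_orthogonal(n, rng):
    # QR decomposition of a Gaussian matrix gives a random orthogonal
    # matrix (sign conventions are irrelevant for this check).
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return Q

# Rotate each axis of T by an independent orthogonal matrix.
O1, O2, O3 = (random_orthogonal(N, rng) for _ in range(3))
T_rot = np.einsum("ia,jb,kc,abc->ijk", O1, O2, O3, T)

# Both scalars agree before and after rotation, up to float error.
for f in (degree2_invariant, degree4_invariant):
    assert np.isclose(f(T), f(T_rot))
    print(f.__name__, f(T), f(T_rot))
```

In diagrammatic terms, each such invariant corresponds to a graph whose vertices are copies of the tensor and whose edges are index contractions; the paper studies the family of O(N)-invariant graphs of this kind as the building blocks of its decomposition framework.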

Published

2022-06-28

How to Cite

Ouerfelli, M., Tamaazousti, M., & Rivasseau, V. (2022). Random Tensor Theory for Tensor Decomposition. Proceedings of the AAAI Conference on Artificial Intelligence, 36(7), 7913-7921. https://doi.org/10.1609/aaai.v36i7.20761

Section

AAAI Technical Track on Machine Learning II