Learning a Gradient-free Riemannian Optimizer on Tangent Spaces
Keywords: Learning with Manifolds, Optimization
Abstract
A principal way of addressing constrained optimization problems is to model them as problems on Riemannian manifolds. Recently, Riemannian meta-optimization has emerged as a promising way to solve constrained optimization problems by learning optimizers on Riemannian manifolds in a data-driven fashion, making it possible to design task-specific constrained optimizers. A closer look at Riemannian meta-optimization reveals that learning optimizers on Riemannian manifolds requires differentiating through the nonlinear Riemannian optimization, which is complex and computationally expensive. In this paper, we propose a simple yet efficient Riemannian meta-optimization method that learns to optimize on the tangent spaces of manifolds. To this end, we present a gradient-free optimizer on tangent spaces, which takes the parameters of the model along with the training data as inputs and generates the updated parameters directly. As a result, the constrained optimization is transferred from Riemannian manifolds to tangent spaces, where complex Riemannian operations (e.g., retraction) are removed from the optimizer, and learning the optimizer does not require differentiating through the Riemannian optimization. We empirically show that our method enables efficient learning of the optimizer while enjoying a good optimization trajectory in a data-driven manner.
How to Cite
Fan, X., Gao, Z., Wu, Y., Jia, Y., & Harandi, M. (2021). Learning a Gradient-free Riemannian Optimizer on Tangent Spaces. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 7377-7384. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16905
AAAI Technical Track on Machine Learning I