Accelerating Continuous Normalizing Flow with Trajectory Polynomial Regularization

Authors

  • Han-Hsien Huang, Academia Sinica and Texas A&M University
  • Mi-Yen Yeh, Academia Sinica

DOI

https://doi.org/10.1609/aaai.v35i9.16956

Keywords

Neural Generative Models & Autoencoders

Abstract

In this paper, we propose an approach to effectively accelerate the computation of continuous normalizing flow (CNF), which has proven to be a powerful tool for tasks such as variational inference and density estimation. The training cost of CNF can be extremely high because the number of function evaluations (NFE) required to solve the corresponding ordinary differential equations (ODEs) is very large. We attribute the high NFE to large truncation errors in solving the ODEs. To address this problem, we propose to add a regularization term that penalizes the difference between the ODE trajectory and a polynomial regression fitted to it. The trajectory thus approximates a polynomial function, and the truncation error becomes smaller. Furthermore, we provide two proofs showing that the additional regularization does not harm training quality. Experimental results show that our proposed method reduces NFE by 42.3% to 71.3% on the task of density estimation and by 19.3% to 32.1% on variational auto-encoders, while the testing losses are not affected.
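To make the regularization concrete, the sketch below shows one plausible way to compute such a penalty in PyTorch: fit a least-squares polynomial in time to the solver's trajectory samples and penalize the squared residual. This is a minimal illustration under our own assumptions; the function name `tpr_penalty`, the default polynomial degree, and the normal-equations solve are illustrative choices, not the authors' implementation.

```python
import torch

def tpr_penalty(ts, zs, degree=2):
    """Illustrative trajectory polynomial regularization penalty.

    ts: (K,) time points at which the ODE solver evaluated the state.
    zs: (K, D) trajectory states z(t_k), flattened to D dimensions.
    Returns the mean squared residual between the trajectory and its
    least-squares polynomial fit of the given degree in t.
    """
    # Vandermonde design matrix: column j holds ts**j.
    V = torch.stack([ts ** j for j in range(degree + 1)], dim=1)  # (K, d+1)
    # Least-squares fit via the normal equations (differentiable).
    coeffs = torch.linalg.solve(V.T @ V, V.T @ zs)                # (d+1, D)
    residual = zs - V @ coeffs                                    # (K, D)
    return residual.pow(2).mean()

# Hypothetical usage: add the penalty to the CNF training objective,
# with lam a regularization weight chosen by the practitioner.
# loss = nll_loss + lam * tpr_penalty(ts, zs)
```

Penalizing this residual pushes each coordinate of the trajectory toward a low-degree polynomial in t, which is exactly the family of curves that standard Runge-Kutta solvers integrate with small truncation error, so fewer (or larger) steps suffice.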

Published

2021-05-18

How to Cite

Huang, H.-H., & Yeh, M.-Y. (2021). Accelerating Continuous Normalizing Flow with Trajectory Polynomial Regularization. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7832-7839. https://doi.org/10.1609/aaai.v35i9.16956

Section

AAAI Technical Track on Machine Learning II