Efficient Learning of PDEs via Taylor Expansion and Sparse Decomposition into Value and Fourier Domains

Authors

  • Md Nasim, Purdue University
  • Yexiang Xue, Purdue University

DOI:

https://doi.org/10.1609/aaai.v38i13.29356

Keywords:

ML: Scalability of ML Systems, APP: Natural Sciences

Abstract

Accelerating the learning of Partial Differential Equations (PDEs) from experimental data will speed up the pace of scientific discovery. Previous randomized algorithms exploit sparsity in PDE updates for acceleration. However, such methods are applicable only to a limited class of decomposable PDEs, which have sparse features in the value domain. We propose Reel, which accelerates the learning of PDEs via random projection and has much broader applicability. Reel exploits sparsity by decomposing dense updates into sparse components in both the value and frequency domains. This decomposition enables efficient learning when the updates consist of gradually changing terms spread across large areas (sparse in the frequency domain) together with a few rapid changes concentrated in a small set of “interfacial” regions (sparse in the value domain). Random projection is then applied to compress the sparse signals for learning. To broaden the model's applicability, Taylor series expansion is used in Reel to approximate the nonlinear PDE updates with polynomials in the decomposable form. Theoretically, we derive a constant-factor approximation between the projected loss function and the original one using a poly-logarithmic number of projected dimensions. Experimentally, we provide empirical evidence that Reel leads to faster learning of PDE models (70–98% reduction in training time when the data is compressed to 1% of its original size) with quality comparable to the non-compressed models.
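
As a rough illustration of the decomposition described above, the Python/NumPy sketch below (assuming a 2D grid and illustrative thresholds; not the authors' Reel implementation) splits a dense update field into a value-domain-sparse part and a Fourier-domain-sparse remainder, then compresses both with a random projection.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dense update field on a 64x64 grid: a smooth background
# plus a few sharp, localized "interfacial" spikes.
n = 64
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
update = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)  # smooth, slowly varying part
update[30:33, 30:33] += 5.0                             # a few rapid, localized changes

# Value-domain-sparse component: keep only the large-magnitude entries.
value_thresh = 1.5  # illustrative threshold
value_sparse = np.where(np.abs(update) > value_thresh, update, 0.0)

# Fourier-domain-sparse component: the smooth remainder, represented by
# its k largest Fourier coefficients.
residual = update - value_sparse
coeffs = np.fft.fft2(residual)
k = 32  # illustrative coefficient budget
cutoff = np.sort(np.abs(coeffs).ravel())[-k]
fourier_sparse = np.where(np.abs(coeffs) >= cutoff, coeffs, 0.0)

# Random projection: compress each (vectorized) sparse signal to m dimensions.
m = 128  # illustrative projected dimension
proj = rng.normal(size=(m, n * n)) / np.sqrt(m)
compressed_value = proj @ value_sparse.ravel()
compressed_fourier = proj @ fourier_sparse.ravel()  # complex-valued

print(compressed_value.shape, compressed_fourier.shape)  # (128,) (128,)

A Gaussian matrix scaled by 1/sqrt(m) is a standard random projection that approximately preserves inner products, which is in the spirit of the constant-factor guarantee at a poly-logarithmic projected dimension stated in the abstract.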

Published

2024-03-24

How to Cite

Nasim, M., & Xue, Y. (2024). Efficient Learning of PDEs via Taylor Expansion and Sparse Decomposition into Value and Fourier Domains. Proceedings of the AAAI Conference on Artificial Intelligence, 38(13), 14422-14430. https://doi.org/10.1609/aaai.v38i13.29356

Issue

Vol. 38 No. 13 (2024)

Section

AAAI Technical Track on Machine Learning IV