Mixture Manifold Networks: A Computationally Efficient Baseline for Inverse Modeling

Authors

  • Gregory P. Spell, Duke University
  • Simiao Ren, Duke University
  • Leslie M. Collins, Duke University
  • Jordan M. Malof, University of Montana

DOI:

https://doi.org/10.1609/aaai.v37i8.26178

Keywords:

ML: Applications, ML: Deep Neural Network Algorithms, APP: Design

Abstract

We propose and show the efficacy of a new method to address generic inverse problems. Inverse modeling is the task whereby one seeks to determine the hidden parameters of a natural system that produce a given set of observed measurements. Recent work has shown impressive results using deep learning, but there is a trade-off between model performance and computational time: for some applications, the inference time of the best-performing inverse modeling method may be prohibitive. In seeking a faster, high-performing model, we present a new method that leverages multiple manifolds as a mixture of backward (i.e., inverse) models in a forward-backward model architecture. These backward models all share a common forward model, and their training is facilitated by generating training examples from the forward model. The proposed method thus has two innovations: 1) the Mixture Manifold Network (MMN) architecture, and 2) the training procedure, which augments the backward models' training data using the forward model. We demonstrate the advantages of our method by comparing against several baselines on four benchmark inverse problems, and we further provide analysis to motivate its design.
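Concretely, the forward-backward design described in the abstract can be pictured as several backward networks that each propose candidate hidden parameters, with one shared forward network used both to re-simulate and rank those candidates and to label additional training data. Below is a minimal sketch of that idea, assuming PyTorch; all names (MLP, MixtureManifoldSketch, invert, augment) are illustrative and not from the paper.

    # A minimal sketch of the forward-backward mixture idea, assuming PyTorch.
    import torch
    import torch.nn as nn

    class MLP(nn.Module):
        def __init__(self, in_dim, out_dim, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, out_dim),
            )

        def forward(self, x):
            return self.net(x)

    class MixtureManifoldSketch(nn.Module):
        """K backward (inverse) models sharing one common forward model."""

        def __init__(self, x_dim, y_dim, num_backward=4):
            super().__init__()
            # Shared forward model g: hidden parameters x -> measurements y.
            self.forward_model = MLP(x_dim, y_dim)
            # Mixture of backward models f_k: measurements y -> parameters x.
            self.backward_models = nn.ModuleList(
                [MLP(y_dim, x_dim) for _ in range(num_backward)]
            )

        def invert(self, y):
            # Each backward model proposes a candidate x; per sample, keep the
            # candidate whose re-simulated measurement g(x) best matches y.
            candidates = torch.stack([f(y) for f in self.backward_models])  # (K, B, x_dim)
            errors = torch.stack([
                ((self.forward_model(x) - y) ** 2).mean(dim=-1)
                for x in candidates
            ])                                                              # (K, B)
            best = errors.argmin(dim=0)                                     # (B,)
            return candidates[best, torch.arange(y.shape[0])]               # (B, x_dim)

        def augment(self, x_sampler, n):
            # Training-data augmentation as described in the abstract: sample
            # hidden parameters and label them with the forward model.
            x = x_sampler(n)
            with torch.no_grad():
                y = self.forward_model(x)
            return x, y

Here the candidate-selection rule (re-simulation error under the shared forward model) and the augmentation scheme are plausible readings of the abstract, not a faithful reproduction of the paper's training or inference procedure.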

Published

2023-06-26

How to Cite

Spell, G. P., Ren, S., Collins, L. M., & Malof, J. M. (2023). Mixture Manifold Networks: A Computationally Efficient Baseline for Inverse Modeling. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9874-9881. https://doi.org/10.1609/aaai.v37i8.26178

Issue

Vol. 37 No. 8 (2023)

Section

AAAI Technical Track on Machine Learning III