Monte Carlo Diffusion for Generalizable Learning-Based RANSAC

Authors

  • Jiale Wang Xi'an Jiaotong University, China
  • Chen Zhao EPFL, Switzerland
  • Wei Ke Xi'an Jiaotong University, China
  • Tong Zhang University of the Chinese Academy of Sciences, China

DOI:

https://doi.org/10.1609/aaai.v40i12.37954

Abstract

Random Sample Consensus (RANSAC) is a fundamental approach for robustly estimating parametric models from noisy data. Existing learning-based RANSAC methods use deep learning to enhance the robustness of RANSAC against outliers. However, these approaches are trained and tested on data generated by the same algorithms, which limits their generalization to out-of-distribution data at inference time. Therefore, in this paper, we introduce a novel diffusion-based paradigm that progressively injects noise into ground-truth data, simulating the noisy conditions used to train learning-based RANSAC. To enhance data diversity, we incorporate Monte Carlo sampling into the diffusion paradigm, approximating diverse data distributions by introducing different types of randomness at multiple stages. We evaluate our approach in the context of feature matching through comprehensive experiments on the ScanNet and MegaDepth datasets. The experimental results demonstrate that our Monte Carlo diffusion mechanism significantly improves the generalization ability of learning-based RANSAC. We also conduct extensive ablation studies that highlight the effectiveness of key components in our framework.
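The core idea in the abstract, progressively injecting noise into ground-truth data while sampling the sources of randomness Monte Carlo-style, can be illustrated with a minimal sketch. Note this is an assumption-laden illustration, not the authors' implementation: it assumes a standard DDPM-style forward process with a linear beta schedule, and treats the randomly sampled timestep and outlier ratio as the "different types of randomness at multiple stages"; the function name `monte_carlo_diffuse` and all parameters are hypothetical.

```python
import numpy as np

def monte_carlo_diffuse(matches, num_steps=100, rng=None):
    """Simulate noisy training data from ground-truth correspondences.

    Hypothetical sketch: (1) sample a random diffusion timestep and add
    Gaussian noise via a standard forward diffusion process, then
    (2) sample a random outlier ratio and replace that fraction of
    matches with uniform random correspondences.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Standard DDPM-style linear beta schedule (assumed, not from the paper).
    betas = np.linspace(1e-4, 0.02, num_steps)
    alpha_bar = np.cumprod(1.0 - betas)

    # Randomness stage 1: Monte Carlo sample of the diffusion timestep.
    t = int(rng.integers(0, num_steps))
    eps = rng.standard_normal(matches.shape)
    noisy = np.sqrt(alpha_bar[t]) * matches + np.sqrt(1.0 - alpha_bar[t]) * eps

    # Randomness stage 2: Monte Carlo sample of the outlier ratio;
    # corrupt that fraction of matches into uniform outliers.
    ratio = rng.uniform(0.1, 0.9)
    n_out = int(ratio * len(matches))
    idx = rng.choice(len(matches), size=n_out, replace=False)
    noisy[idx] = rng.uniform(-1.0, 1.0, size=(n_out, matches.shape[1]))

    return noisy, t, ratio
```

Because the timestep and outlier ratio are resampled on every call, repeated draws from the same ground-truth matches yield a diverse family of noisy training distributions rather than the fixed output of a single matching algorithm, which is the generalization argument the abstract makes.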

Published

2026-03-14

How to Cite

Wang, J., Zhao, C., Ke, W., & Zhang, T. (2026). Monte Carlo Diffusion for Generalizable Learning-Based RANSAC. Proceedings of the AAAI Conference on Artificial Intelligence, 40(12), 9894–9902. https://doi.org/10.1609/aaai.v40i12.37954

Section

AAAI Technical Track on Computer Vision IX