Mx2M: Masked Cross-Modality Modeling in Domain Adaptation for 3D Semantic Segmentation

Authors

  • Boxiang Zhang, Jilin University
  • Zunran Wang, Tencent Robotics X
  • Yonggen Ling, Tencent Robotics X
  • Yuanyuan Guan, Jilin University
  • Shenghao Zhang, Tencent Robotics X
  • Wenhui Li, Jilin University

DOI:

https://doi.org/10.1609/aaai.v37i3.25448

Keywords:

CV: Segmentation, CV: 3D Computer Vision, CV: Multi-modal Vision, CV: Representation Learning for Vision, CV: Scene Analysis & Understanding, CV: Vision for Robotics & Autonomous Driving, ML: Applications

Abstract

Existing methods of cross-modal domain adaptation for 3D semantic segmentation predict results only via the 2D-3D complementarity obtained by cross-modal feature matching. However, because supervision is lacking in the target domain, this complementarity is not always reliable, and the results degrade when the domain gap is large. To address the lack of supervision, we introduce masked modeling into this task and propose Mx2M, a method that uses masked cross-modality modeling to reduce the large domain gap. Mx2M contains two components. The first is the core solution, cross-modal removal and prediction (xMRP), which lets Mx2M adapt to various scenarios and provides cross-modal self-supervision. The second is a new way of cross-modal feature matching, the dynamic cross-modal filter (DxMF), which ensures that the whole method dynamically exploits more suitable 2D-3D complementarity. Evaluating Mx2M on three DA scenarios, namely Day/Night, USA/Singapore, and A2D2/SemanticKITTI, shows large improvements over previous methods on many metrics.
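The abstract only names the components, so below is a minimal, hypothetical sketch of the masked cross-modality modeling idea it describes: per-point features of one modality are removed (masked) and regressed from the corresponding features of the other modality, giving a self-supervised signal that needs no target-domain labels. The module name MaskedCrossModalPredictor, the parameter mask_ratio, and the feature dimensions are illustrative assumptions, not the authors' implementation, and only the 3D-to-2D direction is shown.

```python
# Hypothetical sketch (not the paper's code): mask per-point 2D features and
# predict them from the corresponding 3D features, supervised with an MSE loss.
import torch
import torch.nn as nn


class MaskedCrossModalPredictor(nn.Module):
    def __init__(self, dim_2d: int = 64, dim_3d: int = 16, mask_ratio: float = 0.5):
        super().__init__()
        self.mask_ratio = mask_ratio
        # Small head that regresses removed 2D features from 3D features.
        self.head_3d_to_2d = nn.Sequential(
            nn.Linear(dim_3d, 128), nn.ReLU(inplace=True), nn.Linear(128, dim_2d)
        )

    def forward(self, feats_2d: torch.Tensor, feats_3d: torch.Tensor) -> torch.Tensor:
        """feats_2d: (N, dim_2d) image features sampled at projected point locations;
        feats_3d: (N, dim_3d) point-cloud features for the same N points."""
        n = feats_2d.size(0)
        mask = torch.rand(n, device=feats_2d.device) < self.mask_ratio
        # "Remove" the masked 2D features and predict them from the 3D branch.
        pred_2d = self.head_3d_to_2d(feats_3d[mask])
        target_2d = feats_2d[mask].detach()
        return nn.functional.mse_loss(pred_2d, target_2d)


if __name__ == "__main__":
    n_points = 1024
    loss = MaskedCrossModalPredictor()(torch.randn(n_points, 64), torch.randn(n_points, 16))
    print(float(loss))
```

A symmetric 2D-to-3D head would complete the cross-modal self-supervision; how the paper combines the two directions and integrates DxMF is described in the full text.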

Published

2023-06-26

How to Cite

Zhang, B., Wang, Z., Ling, Y., Guan, Y., Zhang, S., & Li, W. (2023). Mx2M: Masked Cross-Modality Modeling in Domain Adaptation for 3D Semantic Segmentation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3), 3401-3409. https://doi.org/10.1609/aaai.v37i3.25448

Issue

Vol. 37 No. 3 (2023)

Section

AAAI Technical Track on Computer Vision III