X4D-SceneFormer: Enhanced Scene Understanding on 4D Point Cloud Videos through Cross-Modal Knowledge Transfer

Authors

  • Linglin Jing (Shanghai AI Laboratory; Department of Computer Science, Loughborough University)
  • Ying Xue (FNii, CUHK-Shenzhen; SSE, CUHK-Shenzhen)
  • Xu Yan (FNii, CUHK-Shenzhen; SSE, CUHK-Shenzhen)
  • Chaoda Zheng (FNii, CUHK-Shenzhen; SSE, CUHK-Shenzhen)
  • Dong Wang (Shanghai AI Laboratory)
  • Ruimao Zhang (SSE, CUHK-Shenzhen; FNii, CUHK-Shenzhen)
  • Zhigang Wang (Shanghai AI Laboratory)
  • Hui Fang (Department of Computer Science, Loughborough University)
  • Bin Zhao (Shanghai AI Laboratory)
  • Zhen Li (SSE, CUHK-Shenzhen; FNii, CUHK-Shenzhen)

DOI:

https://doi.org/10.1609/aaai.v38i3.28045

Keywords:

CV: 3D Computer Vision, CV: Applications, CV: Multi-modal Vision, CV: Video Understanding & Activity Analysis

Abstract

The field of 4D point cloud understanding is rapidly developing, with the goal of analyzing dynamic 3D point cloud sequences. However, it remains a challenging task due to the sparsity and lack of texture in point clouds. Moreover, the irregularity of point clouds makes it difficult to align temporal information across video frames. To address these issues, we propose a novel cross-modal knowledge transfer framework, called X4D-SceneFormer. This framework enhances 4D scene understanding by transferring texture priors from RGB sequences using a Transformer architecture with temporal relationship mining. Specifically, the framework adopts a dual-branch architecture consisting of a 4D point cloud transformer and a Gradient-aware Image Transformer (GIT). The GIT combines visual texture and temporal correlation features to provide rich semantics and dynamics for better point cloud representation. During training, we employ multiple knowledge transfer techniques, including temporal consistency losses and masked self-attention, to strengthen the transfer between modalities. This yields enhanced performance at inference using single-modal 4D point cloud inputs alone. Extensive experiments demonstrate the superior performance of our framework on various 4D point cloud video understanding tasks, including action recognition, action segmentation, and semantic segmentation. Our results rank 1st on the HOI4D challenge, achieving 85.3% (+7.9%) accuracy on 4D action segmentation and 47.3% (+5.0%) mIoU on 4D semantic segmentation, outperforming the previous state of the art by a large margin. We release the code at https://github.com/jinglinglingling/X4D.
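To make the dual-branch training scheme described above concrete, the following is a minimal sketch in PyTorch. It is not the authors' implementation (the official code is in the repository linked above): the module names (PointBranch, ImageBranch, training_step), feature dimensions, and loss weights are illustrative assumptions, and the GIT's gradient-aware features and the masked self-attention transfer are omitted for brevity. It shows only the core idea of pairing a point cloud branch with an RGB teacher branch through feature alignment and temporal consistency losses.

# Hypothetical sketch of the cross-modal training scheme; names, dims,
# and loss weights are assumptions, not the released X4D implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointBranch(nn.Module):
    # Stand-in for the 4D point cloud transformer (the only branch at inference).
    def __init__(self, feat_dim=256, num_classes=19):
        super().__init__()
        self.embed = nn.Linear(3, feat_dim)
        layer = nn.TransformerEncoderLayer(feat_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, pts):                         # pts: (B, T, N, 3)
        x = self.embed(pts).mean(dim=2)             # pool points -> (B, T, C)
        x = self.encoder(x)                         # temporal self-attention
        return x, self.head(x)                      # per-frame features, logits

class ImageBranch(nn.Module):
    # Stand-in for the Gradient-aware Image Transformer (training-time teacher).
    def __init__(self, feat_dim=256):
        super().__init__()
        self.conv = nn.Conv2d(3, feat_dim, kernel_size=7, stride=4)
        layer = nn.TransformerEncoderLayer(feat_dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, imgs):                        # imgs: (B, T, 3, H, W)
        B, T = imgs.shape[:2]
        x = self.conv(imgs.flatten(0, 1)).mean(dim=(2, 3))  # (B*T, C)
        return self.encoder(x.view(B, T, -1))       # (B, T, C)

def training_step(point_net, image_net, pts, imgs, labels):
    p_feat, logits = point_net(pts)
    with torch.no_grad():                           # teacher provides targets only
        i_feat = image_net(imgs)
    # Per-frame action labels (B, T) -> standard task loss.
    task = F.cross_entropy(logits.flatten(0, 1), labels.flatten())
    # Cross-modal transfer: pull point features toward RGB texture features.
    transfer = 1 - F.cosine_similarity(p_feat, i_feat, dim=-1).mean()
    # Temporal consistency: frame-to-frame feature dynamics should agree
    # across the two modalities.
    consist = F.mse_loss(p_feat[:, 1:] - p_feat[:, :-1],
                         i_feat[:, 1:] - i_feat[:, :-1])
    return task + 0.5 * transfer + 0.5 * consist    # weights are assumptions

Because the image branch contributes only auxiliary losses, it can be dropped at deployment and the point branch runs alone, which is how a framework of this kind supports the single-modal 4D point cloud inference claimed in the abstract.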

Published

2024-03-24

How to Cite

Jing, L., Xue, Y., Yan, X., Zheng, C., Wang, D., Zhang, R., … Li, Z. (2024). X4D-SceneFormer: Enhanced Scene Understanding on 4D Point Cloud Videos through Cross-Modal Knowledge Transfer. Proceedings of the AAAI Conference on Artificial Intelligence, 38(3), 2670–2678. https://doi.org/10.1609/aaai.v38i3.28045

Issue

Vol. 38 No. 3 (2024)

Section

AAAI Technical Track on Computer Vision II