OctAttention: Octree-Based Large-Scale Contexts Model for Point Cloud Compression

Authors

  • Chunyang Fu, School of Electronic and Computer Engineering, Peking University Shenzhen Graduate School; Peng Cheng Laboratory
  • Ge Li, School of Electronic and Computer Engineering, Peking University Shenzhen Graduate School
  • Rui Song, School of Electronic and Computer Engineering, Peking University Shenzhen Graduate School
  • Wei Gao, School of Electronic and Computer Engineering, Peking University Shenzhen Graduate School; Peng Cheng Laboratory
  • Shan Liu, Tencent America

DOI:

https://doi.org/10.1609/aaai.v36i1.19942

Keywords:

Computer Vision (CV)

Abstract

In point cloud compression, sufficiently large contexts are essential for modeling the point cloud distribution. However, the contexts gathered by previous voxel-based methods shrink when handling sparse point clouds. To address this problem, we propose a multi-context deep learning framework, OctAttention, that employs the octree structure, a memory-efficient representation for point clouds. Our approach losslessly encodes octree symbol sequences by gathering information from sibling and ancestor nodes. Specifically, we first represent point clouds with an octree to reduce spatial redundancy, which is robust to point clouds of different resolutions. We then design a conditional entropy model with a large receptive field that models the sibling and ancestor contexts to exploit the strong dependency among neighboring nodes, and employ an attention mechanism to emphasize the correlated nodes in the context. Furthermore, we introduce a mask operation during training and testing to trade off encoding time against performance. Compared to previous state-of-the-art works, our approach obtains a 10%-35% BD-Rate gain on the LiDAR benchmark (e.g., SemanticKITTI) and object point cloud datasets (e.g., MPEG 8i, MVUB), and saves 95% of coding time compared to the voxel-based baseline. The code is available at https://github.com/zb12138/OctAttention.
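The octree serialization the abstract refers to can be illustrated with a minimal sketch: a point cloud is quantized into voxels, and each occupied node is summarized by an 8-bit occupancy symbol describing which of its eight children contain points. The resulting breadth-first symbol sequence is what a conditional entropy model such as the one in the paper would then encode. This is an illustrative reimplementation, not the authors' code; the function name and level ordering here are hypothetical.

```python
import numpy as np

def octree_symbols(points, depth):
    """Serialize a point cloud into 8-bit octree occupancy symbols.

    points: (N, 3) array with coordinates normalized to [0, 1).
    depth:  number of octree levels.
    Returns the level-by-level (breadth-first) list of occupancy
    bytes (1..255), one per occupied internal node.
    """
    # Quantize to integer voxel coordinates at the finest level.
    coords = np.unique(np.floor(points * (1 << depth)).astype(np.int64), axis=0)
    symbols = []
    for d in range(depth):
        shift = depth - d - 1
        # Node at depth d that each voxel falls into, and the child
        # octant (0..7) it occupies within that node (one bit per axis).
        parents = [tuple(c) for c in (coords >> (shift + 1))]
        octants = ((coords >> shift) & 1) @ np.array([4, 2, 1])
        occ = {}
        for p, o in zip(parents, octants):
            occ[p] = occ.get(p, 0) | (1 << int(o))
        # Emit this level's symbols in a deterministic (sorted-key) order,
        # which the decoder can reproduce from the previous level.
        symbols.extend(int(occ[k]) for k in sorted(occ))
    return symbols
```

For two diagonally opposite points at depth 2, this yields three symbols (one root byte plus one byte per occupied child), illustrating how the octree removes the spatial redundancy of an explicit voxel grid.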

Published

2022-06-28

How to Cite

Fu, C., Li, G., Song, R., Gao, W., & Liu, S. (2022). OctAttention: Octree-Based Large-Scale Contexts Model for Point Cloud Compression. Proceedings of the AAAI Conference on Artificial Intelligence, 36(1), 625-633. https://doi.org/10.1609/aaai.v36i1.19942

Section

AAAI Technical Track on Computer Vision I