patchDPCC: A Patchwise Deep Compression Framework for Dynamic Point Clouds
DOI:
https://doi.org/10.1609/aaai.v38i5.28238
Keywords:
CV: 3D Computer Vision, APP: Other Applications, CV: Applications
Abstract
When compressing point clouds, point-based deep learning models operate on points in a continuous space, which can avoid the geometric fidelity loss introduced by voxelization during preprocessing. However, these methods hardly scale to inputs with an arbitrary number of points. Furthermore, they compress point cloud frames individually, ignoring the conventional wisdom of leveraging inter-frame similarity. In this work, we propose a patchwise compression framework called patchDPCC, which consists of a patch group generation module and a point-based compression model. Algorithms are developed to generate patches from different frames that represent the same object and, more importantly, to regulate these patches to have the same number of points. We also incorporate a feature transfer module into the compression model, which refines feature quality by exploiting inter-frame similarity. Our model generates point-wise features for entropy coding, which guarantees fast reconstruction. Evaluation on the MPEG 8i dataset shows that our method improves the compression ratio by 47.01% over PCGCv2 and 85.22% over V-PCC at the same reconstruction quality, which is 9% and 16% better than D-DPCC, respectively. Our method also achieves the fastest decoding speed among learning-based compression models.
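The abstract does not detail how patch groups are generated, so the following is a minimal illustrative sketch in Python (not the authors' implementation) of the one constraint it does state: corresponding patches are extracted from different frames and regulated to the same number of points. The nearest-seed assignment, the resampling strategy, and the names resample_patch and make_patch_groups are assumptions made purely for illustration.

import numpy as np

def resample_patch(points, target_n, rng):
    # Regulate a patch to exactly target_n points: random downsampling
    # if it is too large, duplication-based padding if it is too small.
    n = len(points)
    if n >= target_n:
        idx = rng.choice(n, size=target_n, replace=False)
    else:
        idx = np.concatenate([np.arange(n), rng.choice(n, size=target_n - n, replace=True)])
    return points[idx]

def make_patch_groups(frames, n_patches, points_per_patch, rng):
    # Hypothetical patch group generation: seed centroids are sampled from the
    # first frame and shared across frames, so patch k in every frame covers
    # roughly the same region of the object (a stand-in for the paper's module).
    seeds = frames[0][rng.choice(len(frames[0]), size=n_patches, replace=False)]
    groups_per_frame = []
    for pts in frames:
        # Assign each point to its nearest seed centroid.
        assign = np.argmin(((pts[:, None, :] - seeds[None, :, :]) ** 2).sum(axis=-1), axis=1)
        patches = []
        for k in range(n_patches):
            sub = pts[assign == k]
            if len(sub) == 0:
                sub = seeds[k:k + 1]  # fall back to the seed location for empty patches
            patches.append(resample_patch(sub, points_per_patch, rng))
        groups_per_frame.append(patches)
    # Transpose: one group per patch index, containing that patch from every frame.
    return list(zip(*groups_per_frame))

# Usage: two frames with different point counts yield 64 patch groups,
# and every patch in a group has exactly 256 points.
rng = np.random.default_rng(0)
frames = [rng.normal(size=(5000, 3)), rng.normal(size=(4800, 3))]
groups = make_patch_groups(frames, n_patches=64, points_per_patch=256, rng=rng)
print(len(groups), groups[0][0].shape)  # 64 (256, 3)

Fixing the per-patch point count is what lets a point-based model with a fixed input size handle frames of arbitrary size, and the framewise correspondence of patches within a group is what a feature transfer module can exploit.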
Published
2024-03-24
How to Cite
Pan, Z., Xiao, M., Han, X., Yu, D., Zhang, G., & Liu, Y. (2024). patchDPCC: A Patchwise Deep Compression Framework for Dynamic Point Clouds. Proceedings of the AAAI Conference on Artificial Intelligence, 38(5), 4406-4414. https://doi.org/10.1609/aaai.v38i5.28238
Section
AAAI Technical Track on Computer Vision IV