Self-Supervised Category-Level 6D Object Pose Estimation with Deep Implicit Shape Representation

Authors

  • Wanli Peng, Dalian University of Technology
  • Jianhang Yan, Dalian University of Technology
  • Hongtao Wen, Dalian University of Technology
  • Yi Sun, Dalian University of Technology

DOI:

https://doi.org/10.1609/aaai.v36i2.20104

Keywords:

Computer Vision (CV)

Abstract

Category-level 6D pose estimation generalizes better to unseen objects within a category than instance-level 6D pose estimation. However, existing category-level methods usually require supervised training with a large number of 6D pose annotations, which makes them difficult to apply in real scenarios. To address this problem, in this paper we propose a self-supervised framework for category-level 6D pose estimation. We leverage DeepSDF as a 3D object representation and design several novel loss functions based on DeepSDF that enable the self-supervised model to predict the poses of unseen objects without any 6D pose labels or explicit 3D models in real scenarios. Experiments demonstrate that our method achieves performance comparable to state-of-the-art fully supervised methods on the category-level NOCS benchmark.
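The core idea of an SDF-based self-supervised loss can be illustrated with a minimal sketch: points observed on the object surface, mapped back into the canonical object frame by the inverse of the predicted pose, should lie on the zero level set of the DeepSDF shape. The sketch below assumes a pretrained DeepSDF decoder and hypothetical names (`sdf_decoder`, `latent_code`); it is an illustration of the general technique, not the paper's exact loss formulation.

```python
import torch

def sdf_pose_consistency_loss(sdf_decoder, latent_code, points_cam, R, t, s):
    """Illustrative SDF consistency loss for self-supervised pose training.

    Assumption: sdf_decoder is a pretrained DeepSDF network mapping
    concatenated (latent, xyz) inputs to signed distance values.

    Args:
        sdf_decoder: callable, (N, D+3) -> (N, 1) signed distances.
        latent_code: (D,) shape latent vector for the object.
        points_cam:  (N, 3) observed depth points in the camera frame.
        R: (3, 3) predicted rotation; t: (3,) translation; s: scalar scale.
    """
    # Map observed points into the canonical (normalized) object frame:
    # x_canonical = R^T (x_cam - t) / s.  Row-wise, (x - t) @ R == R^T (x - t).
    points_canonical = (points_cam - t) @ R / s

    # Query the implicit shape, broadcasting the latent over all points.
    latent = latent_code.unsqueeze(0).expand(points_canonical.shape[0], -1)
    sdf_values = sdf_decoder(torch.cat([latent, points_canonical], dim=1))

    # Observed surface points should have (near-)zero signed distance,
    # so the absolute SDF value serves as a label-free supervision signal.
    return sdf_values.abs().mean()
```

Minimizing such a loss penalizes pose predictions whose inverse transform pulls the observed points off the implicit surface, providing a training signal without ground-truth 6D pose labels.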

Published

2022-06-28

How to Cite

Peng, W., Yan, J., Wen, H., & Sun, Y. (2022). Self-Supervised Category-Level 6D Object Pose Estimation with Deep Implicit Shape Representation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2), 2082-2090. https://doi.org/10.1609/aaai.v36i2.20104

Section

AAAI Technical Track on Computer Vision II