MFOS: Model-Free & One-Shot Object Pose Estimation

Authors

  • JongMin Lee, Seoul National University
  • Yohann Cabon, Naver Labs Europe
  • Romain Brégier, Naver Labs Europe
  • Sungjoo Yoo, Seoul National University
  • Jerome Revaud, Naver Labs Europe

DOI:

https://doi.org/10.1609/aaai.v38i4.28072

Keywords:

CV: 3D Computer Vision, CV: Applications, CV: Low Level & Physics-based Vision, CV: Vision for Robotics & Autonomous Driving

Abstract

Existing learning-based methods for object pose estimation in RGB images are mostly model-specific or category-based. They lack the capability to generalize to new object categories at test time, severely hindering their practicality and scalability. Notably, recent attempts have been made to solve this issue, but they still require accurate 3D data of the object surface at both train and test time. In this paper, we introduce a novel approach that can estimate in a single forward pass the pose of objects never seen during training, given minimal input. In contrast to existing state-of-the-art approaches, which rely on task-specific modules, our proposed model is entirely based on a transformer architecture, which can benefit from recently proposed general pretraining on 3D geometry. We conduct extensive experiments and report state-of-the-art one-shot performance on the challenging LINEMOD benchmark. Finally, extensive ablations allow us to determine good practices with this relatively new type of architecture in the field.

Published

2024-03-24

How to Cite

Lee, J., Cabon, Y., Brégier, R., Yoo, S., & Revaud, J. (2024). MFOS: Model-Free & One-Shot Object Pose Estimation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(4), 2911-2919. https://doi.org/10.1609/aaai.v38i4.28072

Section

AAAI Technical Track on Computer Vision III