SIAM: Towards Generalizable Articulated Object Modeling via Single Robot-Object Interaction
DOI:
https://doi.org/10.1609/aaai.v40i22.38913
Abstract
Articulated object modeling, which represents interconnected rigid bodies with their geometry, part segmentation, articulation tree, and physical properties, is crucial for robotic perception and manipulation. Recent methods such as SAGCI leverage Interactive Perception (IP) to refine models through robot interaction. However, SAGCI suffers from prior-dependency (requiring initialization), neglects kinematic/dynamic constraints, and generates non-watertight meshes. To overcome these limitations, we propose SIAM, a novel framework for efficient and generalizable Single-Interaction Articulated Modeling. Given an initial point cloud, SIAM first performs a minimal robot interaction to trigger object motion. It then precisely segments parts by analyzing point cloud differences before and after the interaction. For joint parameter estimation, we introduce an optimization incorporating novel kinematic energy constraints, enhancing physical consistency. Finally, we reconstruct a high-quality, topologically watertight mesh by learning 3D Gaussian Primitives from multi-view RGB-D observations under deformation. Extensive experiments on the PartNet-Mobility benchmark demonstrate state-of-the-art articulation modeling performance. Successful real-world deployment on an xArm robot further validates the framework's practicality and transferability. SIAM achieves accurate, prior-free modeling with significantly reduced interaction cost.
Downloads
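A minimal sketch of the pre/post-interaction point-cloud differencing idea the abstract describes: points in the post-interaction cloud with no nearby counterpart in the pre-interaction cloud are labeled as belonging to the moved (articulated) part. The function name, brute-force nearest-neighbor search, and threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def segment_moved_points(pc_pre, pc_post, threshold=0.02):
    """Label each post-interaction point as 'moved' if no pre-interaction
    point lies within `threshold` meters (nearest-neighbor difference)."""
    # Pairwise distances between post and pre points: (N_post, N_pre).
    d = np.linalg.norm(pc_post[:, None, :] - pc_pre[None, :, :], axis=-1)
    nn = d.min(axis=1)        # distance from each post point to closest pre point
    return nn > threshold     # True = displaced by the interaction

# Toy example: two static base points plus one point displaced by the interaction.
pre = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
post = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.5, 0.0]])
mask = segment_moved_points(pre, post)  # → [False, False, True]
```

In practice a KD-tree (e.g. `scipy.spatial.cKDTree`) would replace the quadratic distance matrix for dense clouds.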
Published
2026-03-14
How to Cite
Liu, Y., Zhang, L., Wu, D., Zhang, Y., Huang, A., Wang, Z., … Guo, D. (2026). SIAM: Towards Generalizable Articulated Object Modeling via Single Robot-Object Interaction. Proceedings of the AAAI Conference on Artificial Intelligence, 40(22), 18478–18486. https://doi.org/10.1609/aaai.v40i22.38913
Issue
Section
AAAI Technical Track on Intelligent Robotics