Generative Partial Visual-Tactile Fused Object Clustering

Authors

  • Tao Zhang — State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Yang Cong — State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences
  • Gan Sun — State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences
  • Jiahua Dong — State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Yuyang Liu — State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Zhengming Ding — Department of Computer Science, Tulane University

DOI:

https://doi.org/10.1609/aaai.v35i7.16766

Keywords:

Multimodal Perception & Sensor Fusion

Abstract

Visual-tactile fused sensing for object clustering has achieved significant progress recently, since involving the tactile modality can effectively improve clustering performance. However, missing-data (i.e., partial-data) issues frequently arise due to occlusion and noise during data collection. Most existing partial multi-view clustering methods do not handle this issue well because of the heterogeneous modality challenge, and naively applying them inevitably introduces negative effects and further hurts performance. To address these challenges, we propose a Generative Partial Visual-Tactile Fused (i.e., GPVTF) framework for object clustering. More specifically, we first extract partial visual and tactile features from the partial visual and tactile data, respectively, and encode the extracted features in modality-specific feature subspaces. A conditional cross-modal clustering generative adversarial network is then developed to synthesize one modality conditioned on the other, which compensates for missing samples and naturally aligns the visual and tactile modalities through adversarial learning. Finally, two pseudo-label based KL-divergence losses are employed to update the corresponding modality-specific encoders. Extensive comparative experiments on three public visual-tactile datasets demonstrate the effectiveness of our method.
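
The sketch below illustrates the pipeline described in the abstract: modality-specific encoders, a conditional cross-modal generator and discriminator for compensating missing samples, and a DEC-style pseudo-label KL-divergence clustering loss. It is a minimal PyTorch-style illustration; all module names, dimensions, and the specific network shapes are assumptions for exposition, not the authors' released implementation.

    # Minimal sketch of the GPVTF pipeline (illustrative assumptions, not the official code).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Encoder(nn.Module):
        """Modality-specific encoder mapping raw visual or tactile features to a latent subspace."""
        def __init__(self, in_dim, latent_dim=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))

        def forward(self, x):
            return self.net(x)

    class CrossModalGenerator(nn.Module):
        """Synthesizes one modality's latent code conditioned on the other modality plus noise."""
        def __init__(self, latent_dim=64, noise_dim=16):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(latent_dim + noise_dim, 128), nn.ReLU(),
                                     nn.Linear(128, latent_dim))

        def forward(self, z_cond, noise):
            return self.net(torch.cat([z_cond, noise], dim=1))

    class Discriminator(nn.Module):
        """Distinguishes real latent codes from cross-modally generated ones (adversarial alignment)."""
        def __init__(self, latent_dim=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(latent_dim, 128), nn.LeakyReLU(0.2),
                                     nn.Linear(128, 1))

        def forward(self, z):
            return self.net(z)

    def soft_assignment(z, centers, alpha=1.0):
        """Student's t soft cluster assignment used to form pseudo-labels."""
        dist_sq = torch.cdist(z, centers) ** 2
        q = (1.0 + dist_sq / alpha) ** (-(alpha + 1.0) / 2.0)
        return q / q.sum(dim=1, keepdim=True)

    def target_distribution(q):
        """Sharpened target distribution for the pseudo-label KL-divergence loss."""
        p = q ** 2 / q.sum(dim=0)
        return p / p.sum(dim=1, keepdim=True)

    def clustering_kl_loss(z, centers):
        """KL(P || Q) clustering loss that refines a modality-specific encoder."""
        q = soft_assignment(z, centers)
        p = target_distribution(q).detach()
        return F.kl_div(q.log(), p, reduction="batchmean")

At training time, under these assumptions, the generator fills in the latent code of a missing modality from the observed one, the discriminator aligns the visual and tactile subspaces adversarially, and a clustering_kl_loss is applied per modality to update its encoder toward cluster-friendly representations.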

Published

2021-05-18

How to Cite

Zhang, T., Cong, Y., Sun, G., Dong, J., Liu, Y., & Ding, Z. (2021). Generative Partial Visual-Tactile Fused Object Clustering. Proceedings of the AAAI Conference on Artificial Intelligence, 35(7), 6156-6164. https://doi.org/10.1609/aaai.v35i7.16766

Section

AAAI Technical Track on Intelligent Robots