Collaborative Representation Learning for Alignment of Tactile, Language, and Vision Modalities

Authors

  • Yiyun Zhou, Zhejiang University
  • Mingjing Xu, Swansea University
  • Jingwei Shi, Shanghai University of Finance and Economics
  • Quanjiang Li, National University of Defense Technology
  • Jingyuan Chen, Zhejiang University

DOI:

https://doi.org/10.1609/aaai.v40i22.38956

Abstract

Tactile sensing offers rich information complementary to vision and language, enabling robots to perceive fine-grained object properties. However, existing tactile sensors lack standardization, leading to redundant features that hinder cross-sensor generalization. Moreover, existing methods do not fully model the intermediate interactions among tactile, language, and vision modalities. To address these issues, we propose TLV-CoRe, a CLIP-based Tactile-Language-Vision Collaborative Representation learning method. TLV-CoRe introduces a Sensor-Aware Modulator to unify tactile features across different sensors and employs tactile-irrelevant decoupled learning to disentangle irrelevant tactile features. Additionally, a Unified Bridging Adapter is introduced to enhance tri-modal interaction within the shared representation space. To fairly evaluate the effectiveness of tactile models, we further propose the RSS evaluation framework, focusing on Robustness, Synergy, and Stability across different methods. Experimental results demonstrate that TLV-CoRe significantly improves sensor-agnostic representation learning and cross-modal alignment, offering a new direction for multimodal tactile representation.
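The abstract outlines a CLIP-based tri-modal setup in which tactile features from heterogeneous sensors are mapped into a shared vision-language embedding space. As an illustration only, below is a minimal PyTorch-style sketch of how such a pipeline could be assembled; every module name, shape, and loss choice is an assumption made for this sketch (a FiLM-style per-sensor scale/shift standing in for the Sensor-Aware Modulator, a bottleneck adapter standing in for the Unified Bridging Adapter, and a symmetric InfoNCE loss for CLIP-style alignment), not the authors' actual implementation.

```python
# Hypothetical sketch of a CLIP-style tactile-language-vision alignment setup.
# All module names, dimensions, and design choices are assumptions for
# illustration; the paper's architecture may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SensorAwareModulator(nn.Module):
    """FiLM-style per-sensor scale/shift to map heterogeneous tactile
    features into a shared space (assumed design, not from the paper)."""
    def __init__(self, dim, num_sensors):
        super().__init__()
        self.scale = nn.Embedding(num_sensors, dim)
        self.shift = nn.Embedding(num_sensors, dim)
        nn.init.ones_(self.scale.weight)
        nn.init.zeros_(self.shift.weight)

    def forward(self, tactile_feat, sensor_id):
        return tactile_feat * self.scale(sensor_id) + self.shift(sensor_id)

class BridgingAdapter(nn.Module):
    """Lightweight bottleneck adapter with a residual connection, shared
    across modalities to encourage tri-modal interaction (assumed design)."""
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(F.gelu(self.down(x)))

def clip_style_loss(a, b, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of paired embeddings."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage: align tactile embeddings (modulated per sensor, then adapted)
# with placeholder CLIP vision/text embeddings of the same objects.
dim, batch = 512, 8
modulator = SensorAwareModulator(dim, num_sensors=3)
adapter = BridgingAdapter(dim)

tactile = torch.randn(batch, dim)          # tactile encoder output (placeholder)
sensor_id = torch.randint(0, 3, (batch,))  # which sensor produced each sample
vision = torch.randn(batch, dim)           # CLIP image embeddings (placeholder)
text = torch.randn(batch, dim)             # CLIP text embeddings (placeholder)

tactile_z = adapter(modulator(tactile, sensor_id))
loss = clip_style_loss(tactile_z, vision) + clip_style_loss(tactile_z, text)
loss.backward()
```

The per-sensor modulation is one simple way to absorb sensor-specific feature statistics before alignment; the paper's actual components (including the tactile-irrelevant decoupled learning and the RSS evaluation protocol) are not reproduced here.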

Published

2026-03-14

How to Cite

Zhou, Y., Xu, M., Shi, J., Li, Q., & Chen, J. (2026). Collaborative Representation Learning for Alignment of Tactile, Language, and Vision Modalities. Proceedings of the AAAI Conference on Artificial Intelligence, 40(22), 18864–18872. https://doi.org/10.1609/aaai.v40i22.38956

Issue

Vol. 40 No. 22 (2026)

Section

AAAI Technical Track on Intelligent Robotics