Unleashing the Power of Image-Tabular Self-Supervised Learning via Breaking Cross-Tabular Barriers

Authors

  • Yibing Fu Department of Biomedical Engineering, National University of Singapore
  • Yunpeng Zhao Department of Biomedical Engineering, National University of Singapore
  • Zhitao Zeng Department of Biomedical Engineering, National University of Singapore
  • Cheng Chen Department of Electrical and Electronic Engineering, The University of Hong Kong; School of Biomedical Engineering, The University of Hong Kong
  • Yueming Jin Department of Biomedical Engineering, National University of Singapore; Department of Electrical and Computer Engineering, National University of Singapore

DOI:

https://doi.org/10.1609/aaai.v40i5.37408

Abstract

Multi-modal learning integrating medical images and tabular data has significantly advanced clinical decision-making in recent years. Self-Supervised Learning (SSL) has emerged as a powerful paradigm for pretraining these models on large-scale unlabeled image-tabular data, aiming to learn discriminative representations. However, existing SSL methods for image-tabular representation learning are often confined to specific data cohorts, mainly due to their rigid mechanisms for modeling heterogeneous tabular data. This cross-tabular barrier hinders multi-modal SSL methods from effectively learning transferable medical knowledge shared across diverse cohorts. In this paper, we propose a novel SSL framework, namely CITab, designed to learn powerful multi-modal feature representations in a cross-tabular manner. We design the tabular modeling mechanism from a semantic-awareness perspective by integrating column headers as semantic cues, which facilitates transferable knowledge learning and scalable use of multiple data sources for pretraining. Additionally, we propose a prototype-guided mixture-of-linear layer (P-MoLin) module for tabular feature specialization, empowering the model to effectively handle the heterogeneity of tabular data and explore the underlying medical concepts. We conduct comprehensive evaluations on the Alzheimer's disease diagnosis task across three publicly available data cohorts containing 4,461 subjects. Experimental results demonstrate that CITab outperforms state-of-the-art approaches, paving the way for effective and scalable cross-tabular multi-modal learning.
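The abstract does not specify the internals of the P-MoLin module; as a rough illustration of the general idea it names, the sketch below routes column-wise tabular embeddings through a small set of linear experts via similarity to learned "concept" prototypes. All names, shapes, and the routing scheme are illustrative assumptions, not the paper's actual implementation; the column embeddings are assumed to already carry the header-derived semantic cues.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not from the paper):
d = 16          # embedding dim of each column token
k = 4           # number of learned concept prototypes
n_experts = 3   # number of linear experts

# "Learnable" parameters, randomly initialized for the sketch.
prototypes = rng.normal(size=(k, d))                # concept prototypes
expert_W = rng.normal(size=(n_experts, d, d))       # one linear map per expert
proto_to_expert = rng.normal(size=(k, n_experts))   # prototype -> expert routing

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def p_molin(tokens):
    """tokens: (n_cols, d) semantic column embeddings -> (n_cols, d) specialized features."""
    # 1) Soft-assign each column token to concept prototypes.
    sim = softmax(tokens @ prototypes.T)             # (n_cols, k)
    # 2) Prototype assignments gate the linear experts.
    gate = softmax(sim @ proto_to_expert)            # (n_cols, n_experts)
    # 3) Apply every expert, then mix by the gate weights.
    expert_out = np.einsum('nd,edf->nef', tokens, expert_W)  # (n_cols, n_experts, d)
    return (gate[:, :, None] * expert_out).sum(axis=1)       # (n_cols, d)
```

Because routing is driven by prototype similarity rather than fixed column positions, a layer like this can in principle accept tables with different column sets across cohorts, which matches the cross-tabular motivation stated above.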

Published

2026-03-14

How to Cite

Fu, Y., Zhao, Y., Zeng, Z., Chen, C., & Jin, Y. (2026). Unleashing the Power of Image-Tabular Self-Supervised Learning via Breaking Cross-Tabular Barriers. Proceedings of the AAAI Conference on Artificial Intelligence, 40(5), 4049-4057. https://doi.org/10.1609/aaai.v40i5.37408

Section

AAAI Technical Track on Computer Vision II