Exploring Transferability of Self-Supervised Learning by Task Conflict Calibration

Authors

  • Huijie Guo, Institute of Software, Chinese Academy of Sciences
  • Jingyao Wang, Institute of Software, Chinese Academy of Sciences; University of the Chinese Academy of Sciences
  • Peizheng Guo, Institute of Software, Chinese Academy of Sciences; University of the Chinese Academy of Sciences
  • Xingchen Shen, Institute of Software, Chinese Academy of Sciences
  • Changwen Zheng, Institute of Software, Chinese Academy of Sciences; University of the Chinese Academy of Sciences
  • Wenwen Qiang, Institute of Software, Chinese Academy of Sciences; University of the Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v40i26.39291

Abstract

In this paper, we explore the transferability of self-supervised learning (SSL) by addressing two central questions: (i) what is the representation transferability of SSL, and (ii) how can we effectively model this transferability? Transferability is defined as the ability of a representation learned from one task to support the objective of another. Inspired by the meta-learning paradigm, we construct multiple SSL tasks within each training batch to support explicitly modeling transferability. Based on empirical evidence and causal analysis, we find that although introducing task-level information improves transferability, it is still hindered by task conflict. To address this issue, we propose a Task Conflict Calibration (TCC) method that alleviates the impact of task conflict. Specifically, it first splits each batch to create multiple SSL tasks, infusing task-level information. Next, it uses a factor extraction network to produce causal generative factors shared across all tasks and a weight extraction network to assign a dedicated weight to each sample, employing data reconstruction, orthogonality, and sparsity constraints to ensure effectiveness. Finally, the method calibrates sample representations during SSL training and is integrated into the SSL pipeline via a two-stage bi-level optimization framework to boost the transferability of the learned representations. Experimental results on multiple downstream tasks demonstrate that our method consistently improves the transferability of SSL models.
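The pipeline sketched in the abstract (batch splitting, factor extraction, per-sample weighting, representation calibration) can be illustrated with a minimal NumPy toy. Everything below is an assumption-laden stand-in, not the paper's implementation: `extract_factors` substitutes an SVD for the factor extraction network, `sample_weights` substitutes a reconstruction-error softmax for the weight extraction network, and the batch-splitting scheme and calibration rule are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_batch_into_tasks(batch, n_tasks):
    # Split one training batch into several SSL "tasks"
    # (hypothetical scheme: equal contiguous chunks).
    return np.array_split(batch, n_tasks)

def extract_factors(task, n_factors):
    # Stand-in for the factor extraction network: the top principal
    # directions act as surrogate "generative factors". Rows of the
    # returned matrix are orthonormal, mirroring the orthogonality constraint.
    centered = task - task.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_factors]                     # shape (n_factors, dim)

def sample_weights(task, factors):
    # Stand-in for the weight extraction network: samples that are well
    # reconstructed from the shared factors get higher weight (softmax
    # over negative reconstruction error).
    recon = task @ factors.T @ factors
    err = np.linalg.norm(task - recon, axis=1)
    w = np.exp(-err)
    return w / w.sum()                        # weights sum to 1

def calibrate(task, factors, weights):
    # Calibrate representations: pull each sample toward its projection
    # onto the factor subspace, in proportion to its weight.
    proj = task @ factors.T @ factors
    return (1 - weights[:, None]) * task + weights[:, None] * proj

batch = rng.normal(size=(16, 8))              # toy batch: 16 samples, dim 8
for task in split_batch_into_tasks(batch, n_tasks=4):
    F = extract_factors(task, n_factors=2)
    w = sample_weights(task, F)
    task_calibrated = calibrate(task, F, w)   # same shape as the task
```

In the paper these components are trained networks inside a two-stage bi-level optimization loop; the sketch only shows the data flow of one forward pass over a batch.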

Published

2026-03-14

How to Cite

Guo, H., Wang, J., Guo, P., Shen, X., Zheng, C., & Qiang, W. (2026). Exploring Transferability of Self-Supervised Learning by Task Conflict Calibration. Proceedings of the AAAI Conference on Artificial Intelligence, 40(26), 21441–21449. https://doi.org/10.1609/aaai.v40i26.39291

Section

AAAI Technical Track on Machine Learning III