MmAP: Multi-Modal Alignment Prompt for Cross-Domain Multi-Task Learning

Authors

  • Yi Xin (Nanjing University; Tencent Youtu Lab)
  • Junlong Du (Tencent Youtu Lab)
  • Qiang Wang (Tencent Youtu Lab)
  • Ke Yan (Tencent Youtu Lab)
  • Shouhong Ding (Tencent Youtu Lab)

DOI:

https://doi.org/10.1609/aaai.v38i14.29540

Keywords:

ML: Transfer, Domain Adaptation, Multi-Task Learning

Abstract

Multi-Task Learning (MTL) is designed to train multiple correlated tasks simultaneously, thereby enhancing the performance of individual tasks. Typically, a multi-task network structure consists of a shared backbone and task-specific decoders. However, the complexity of the decoders increases with the number of tasks. To tackle this challenge, we integrate the decoder-free vision-language model CLIP, which exhibits robust zero-shot generalization capability. Recently, parameter-efficient transfer learning methods have been extensively explored with CLIP for adapting to downstream tasks, where prompt tuning showcases strong potential. Nevertheless, these methods solely fine-tune a single modality (text or visual), disrupting the modality structure of CLIP. In this paper, we first propose Multi-modal Alignment Prompt (MmAP) for CLIP, which aligns the text and visual modalities during the fine-tuning process. Building upon MmAP, we develop an innovative multi-task prompt learning framework. On the one hand, to maximize the complementarity of tasks with high similarity, we utilize a gradient-driven task grouping method that partitions tasks into several disjoint groups, and we assign a group-shared MmAP to each group. On the other hand, to preserve the unique characteristics of each task, we assign a task-specific MmAP to each task. Comprehensive experiments on two large multi-task learning datasets demonstrate that our method achieves significant performance improvements compared to full fine-tuning while utilizing only approximately 0.09% of the trainable parameters.
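The abstract describes a framework in which each task receives both a group-shared MmAP and a task-specific MmAP, and both jointly tune the text and visual sides of a frozen CLIP. The sketch below illustrates that structure under assumptions not stated in the abstract: the mechanism by which a single MmAP produces aligned prompts for both modalities (here, a shared source matrix with per-modality projections), the prompt length, and the embedding dimensions are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)


class MmAPSketch:
    """Hypothetical MmAP: a shared trainable source prompt is projected
    into text-side and visual-side prompt tokens, so one set of parameters
    tunes both modalities jointly (an assumed alignment mechanism)."""

    def __init__(self, n_tokens=4, d_source=16, d_text=512, d_visual=768):
        self.source = rng.normal(size=(n_tokens, d_source))    # shared, trainable
        self.to_text = rng.normal(size=(d_source, d_text))     # text projection
        self.to_visual = rng.normal(size=(d_source, d_visual)) # visual projection

    def prompts(self):
        # Text and visual prompts derive from the same source matrix.
        return self.source @ self.to_text, self.source @ self.to_visual


# Per the abstract: one group-shared MmAP per task group (groups formed by
# gradient-driven task grouping) plus one task-specific MmAP per task.
# The prompts fed to the frozen CLIP encoders for a given task would
# concatenate both along the token dimension.
group_mmap = MmAPSketch()   # shared by all tasks in the group
task_mmap = MmAPSketch()    # unique to this task

group_text, group_vis = group_mmap.prompts()
task_text, task_vis = task_mmap.prompts()

text_prompts = np.concatenate([group_text, task_text], axis=0)
visual_prompts = np.concatenate([group_vis, task_vis], axis=0)
print(text_prompts.shape, visual_prompts.shape)  # (8, 512) (8, 768)
```

Because only the small source matrices and projections would be trained while CLIP stays frozen, the trainable-parameter count stays tiny, consistent with the roughly 0.09% figure reported in the abstract.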

Published

2024-03-24

How to Cite

Xin, Y., Du, J., Wang, Q., Yan, K., & Ding, S. (2024). MmAP: Multi-Modal Alignment Prompt for Cross-Domain Multi-Task Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(14), 16076-16084. https://doi.org/10.1609/aaai.v38i14.29540

Section

AAAI Technical Track on Machine Learning V