Sweeping Heterogeneity with Smart MoPs: Mixture of Prompts for LLM Task Adaptation
DOI:
https://doi.org/10.1609/aaai.v39i16.33804

Abstract
Prompt instruction tuning is a popular approach to adapting pretrained LLMs to specific downstream tasks. How to extend this approach to simultaneously handle multiple tasks and data distributions is an open question. We propose Mixture of Prompts (MoPs) with a smart gating function: the proposed system identifies relevant skills embedded in different groups of prompts and dynamically weighs the experts (i.e., collections of prompts) according to the target task. Experiments show that MoPs are resilient to model compression, data-source heterogeneity, and task composition, making them versatile and applicable in various contexts. In practice, MoPs can simultaneously mitigate prompt-training "interference" in multi-task, multi-source scenarios (e.g., task and data heterogeneity across sources) as well as possible side effects of model approximations. Empirically, MoPs are particularly effective with compressed models while maintaining favorable performance in uncompressed settings: compared to baselines, they reduce final perplexity by 9% up to 70% in non-i.i.d. distributed cases and by 3% up to 30% in centralized cases.
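The gating idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: the class name, the linear gate, and the use of a single task embedding are all assumptions made for illustration; the paper's actual gating design may differ.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MixtureOfPrompts:
    """Hypothetical sketch: K groups of soft prompts ("experts") plus a
    linear gate that weighs them based on a task embedding."""

    def __init__(self, num_experts, prompt_len, embed_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Each expert is a block of prompt_len trainable soft-prompt vectors.
        self.experts = rng.normal(size=(num_experts, prompt_len, embed_dim))
        # The gate maps a task embedding to one score per expert.
        self.gate = rng.normal(size=(embed_dim, num_experts))

    def forward(self, task_embedding, token_embeddings):
        # Gating weights: one scalar per expert, summing to 1.
        weights = softmax(task_embedding @ self.gate)          # shape (K,)
        # Weighted combination of the expert prompt blocks.
        prompt = np.tensordot(weights, self.experts, axes=1)   # shape (L, D)
        # Prepend the mixed prompt to the input token embeddings.
        return np.concatenate([prompt, token_embeddings], axis=0), weights

mop = MixtureOfPrompts(num_experts=4, prompt_len=8, embed_dim=16)
seq, w = mop.forward(np.zeros(16), np.zeros((10, 16)))
print(seq.shape)          # (18, 16): 8 prompt vectors + 10 input tokens
print(round(w.sum(), 6))  # 1.0: gating weights form a distribution
```

In a real system the expert prompts and gate would be trained by gradient descent while the backbone LLM stays frozen, and the weighted prompt would be prepended in embedding space before the transformer layers.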
Published
2025-04-11
How to Cite
Dun, C., Hipolito Garcia, M. D. C., Zheng, G., Awadallah, A. H., Sim, R., & Kyrillidis, A. (2025). Sweeping Heterogeneity with Smart MoPs: Mixture of Prompts for LLM Task Adaptation. Proceedings of the AAAI Conference on Artificial Intelligence, 39(16), 16426–16434. https://doi.org/10.1609/aaai.v39i16.33804
Section
AAAI Technical Track on Machine Learning II