LoGIC: Multi-LoRA Guided Importance Consensus for Multi-Task Pruning in Vision Transformers

Authors

  • Yu-Hong Chou, National Taiwan University
  • Rui Fang, National Taiwan University
  • Hsi-Wen Chen, National Taiwan University
  • Ming-Syan Chen, National Taiwan University

DOI:

https://doi.org/10.1609/aaai.v40i25.39195

Abstract

Deploying Vision Transformers (ViTs) in real-world multi-task learning remains challenging due to their massive computational costs and the difficulty of pruning shared backbones without harming task performance. Single-task pruning often causes destructive interference by discarding weights critical to other tasks, while existing multi-task pruning strategies remain costly and unscalable for billion-parameter models. We propose Multi-LoRA Guided Importance Consensus (LoGIC), a unified framework for efficient and robust multi-task ViT pruning. LoGIC follows a two-phase procedure: (i) task-consistent pruning of LoRA modules, guided by a task-adaptive gating mechanism that balances shared and task-specific contributions while enforcing structured sparsity for deployment; and (ii) cross-task consensus pruning of the frozen ViT backbone, which retains both universally shared and task-specialized capabilities, enabling aggressive sparsity without sacrificing accuracy. Across five diverse vision benchmarks, LoGIC achieves up to 50% structured sparsity while maintaining competitive accuracy and surpassing all baselines.
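The abstract's second phase, cross-task consensus pruning, can be illustrated with a minimal sketch. All names, the importance scores, and the min-aggregation rule below are illustrative assumptions, not the paper's actual method: per-task importance scores over structured weight groups are aggregated into a consensus, and the lowest-consensus groups are pruned to a target sparsity.

```python
import numpy as np

def consensus_prune_mask(task_scores, sparsity):
    """Return a keep-mask over weight groups from per-task importance.

    task_scores: array of shape (num_tasks, num_groups), the importance
        of each structured group (e.g., an attention head or MLP column)
        as estimated for each task.
    sparsity: fraction of groups to prune (0.0 to 1.0).

    Consensus here is the minimum across tasks, so a group survives only
    if every task assigns it non-negligible importance -- an illustrative
    choice, not the paper's exact aggregation rule.
    """
    consensus = task_scores.min(axis=0)          # (num_groups,)
    num_groups = consensus.shape[0]
    num_prune = int(round(sparsity * num_groups))
    # Prune the groups with the lowest consensus importance.
    prune_idx = np.argsort(consensus)[:num_prune]
    mask = np.ones(num_groups, dtype=bool)
    mask[prune_idx] = False
    return mask

# Example: 3 tasks scoring 8 weight groups, targeting 50% sparsity.
rng = np.random.default_rng(0)
scores = rng.random((3, 8))
mask = consensus_prune_mask(scores, 0.5)
print(mask.sum())  # 4 groups kept
```

A min-consensus is deliberately conservative: a group critical to even one task is retained, which mirrors the abstract's goal of avoiding destructive interference between tasks; a mean- or vote-based aggregation would trade that protection for higher achievable sparsity.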

Published

2026-03-14

How to Cite

Chou, Y.-H., Fang, R., Chen, H.-W., & Chen, M.-S. (2026). LoGIC: Multi-LoRA Guided Importance Consensus for Multi-Task Pruning in Vision Transformers. Proceedings of the AAAI Conference on Artificial Intelligence, 40(25), 20588–20596. https://doi.org/10.1609/aaai.v40i25.39195

Section

AAAI Technical Track on Machine Learning II