Where and What Matters: Sensitivity-Aware Task Vectors for Many-Shot Multimodal In-Context Learning

Authors

  • Ziyu Ma AMAP, Alibaba Group, China
  • Chenhui Gou Data Science & AI Department, Faculty of IT, Monash University, Australia
  • Yiming Hu AMAP, Alibaba Group, China
  • Yong Wang AMAP, Alibaba Group, China
  • Bohan Zhuang ZIP Lab, Zhejiang University, China
  • Jianfei Cai Data Science & AI Department, Faculty of IT, Monash University, Australia

DOI:

https://doi.org/10.1609/aaai.v40i10.37733

Abstract

Large Multimodal Models (LMMs) have shown promising in-context learning (ICL) capabilities, but scaling to many-shot settings remains difficult due to limited context length and high inference cost. To address these challenges, task-vector-based methods have been explored by inserting compact representations of many-shot in-context demonstrations into model activations. However, existing task-vector-based methods either overlook the importance of where to insert task vectors or struggle to determine suitable values for each location. To this end, we propose a novel Sensitivity-aware Task Vector insertion framework (STV) to determine where and what to insert. Our key insight is that activation deltas across query-context pairs exhibit consistent structural patterns, providing a reliable cue for insertion. Based on the identified sensitivity-aware locations, we construct a pre-clustered activation bank for each location by clustering the activation values, and then apply reinforcement learning to choose the most suitable one to insert. We evaluate STV across a range of multimodal models (e.g., Qwen-VL, Idefics-2) and tasks (e.g., VizWiz, OK-VQA), demonstrating its effectiveness and showing consistent improvements over previous task-vector-based methods with strong generalization.
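The pipeline the abstract describes (score candidate insertion locations by activation-delta sensitivity, cluster each sensitive location's activations into a bank, then learn which bank entry to insert) can be sketched at a high level as below. This is a minimal illustration, not the paper's implementation: the sensitivity proxy (mean absolute delta), the tiny k-means, and the epsilon-greedy stand-in for the RL policy, along with all function names, are assumptions for exposition.

```python
import random
from statistics import mean

def sensitivity_scores(deltas):
    """Score each candidate insertion location by the mean absolute
    activation delta observed across query-context pairs.
    (A simple sensitivity proxy; the paper's exact criterion may differ.)"""
    return {loc: mean(abs(x) for vec in vecs for x in vec)
            for loc, vecs in deltas.items()}

def build_bank(activations, k=2, iters=10, seed=0):
    """Cluster one location's activation vectors into k centroids with a
    tiny k-means, forming that location's pre-clustered activation bank."""
    rng = random.Random(seed)
    centers = rng.sample(activations, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in activations:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(v, centers[c])))
            groups[nearest].append(v)
        # Recompute centroids; keep the old center for an empty cluster.
        centers = [tuple(mean(col) for col in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

def pick_entry(values, eps=0.1, rng=None):
    """Epsilon-greedy choice over bank entries, a simple stand-in for the
    reinforcement-learned policy that selects which centroid to insert."""
    rng = rng or random.Random(0)
    if rng.random() < eps:
        return rng.randrange(len(values))
    return max(range(len(values)), key=values.__getitem__)
```

For example, given activation deltas at two hypothetical locations, `sensitivity_scores` would rank the location with larger deltas as more sensitive, `build_bank` would compress its observed activations into a few centroids, and `pick_entry` would select the centroid whose estimated reward is highest.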

Published

2026-03-14

How to Cite

Ma, Z., Gou, C., Hu, Y., Wang, Y., Zhuang, B., & Cai, J. (2026). Where and What Matters: Sensitivity-Aware Task Vectors for Many-Shot Multimodal In-Context Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(10), 7892-7900. https://doi.org/10.1609/aaai.v40i10.37733

Section

AAAI Technical Track on Computer Vision VII