Internal Activation Revision: Safeguarding Vision Language Models Without Parameter Update

Authors

  • Qing Li, Mohamed bin Zayed University of Artificial Intelligence
  • Jiahui Geng, Mohamed bin Zayed University of Artificial Intelligence
  • Derui Zhu, Technical University Munich
  • Zongxiong Chen, Fraunhofer FOKUS
  • Kun Song, Mohamed bin Zayed University of Artificial Intelligence
  • Lei Ma, The University of Tokyo & University of Alberta
  • Fakhri Karray, Mohamed bin Zayed University of Artificial Intelligence

DOI:

https://doi.org/10.1609/aaai.v39i26.34954

Abstract

Warning: This paper contains offensive content that may disturb some readers. Vision-language models (VLMs) demonstrate strong multimodal capabilities but have been found to be more susceptible to generating harmful content than their backbone large language models (LLMs). Our investigation reveals that integrating images significantly shifts the model's internal activations during the forward pass, diverging from those triggered by textual input. Moreover, the safety alignment of the LLMs embedded within VLMs is not robust enough to handle these activation discrepancies, leaving the models vulnerable to even the simplest jailbreaking attacks. To address this issue, we propose an internal activation revision approach that efficiently revises activations during generation, steering the model toward safer outputs. Our framework incorporates revisions at both the layer and head levels, offering control over the model's generation at varying levels of granularity. In addition, we explore three strategies for constructing positive and negative samples and two approaches for extracting revision vectors, resulting in different variants of our method. Comprehensive experiments demonstrate that the internal activation revision method significantly improves the safety of widely used VLMs, reducing attack success rates by an average of 48.94%, 34.34%, 43.92%, and 52.98% on SafeBench, Safe-Unsafe, Unsafe, and MM-SafetyBench, respectively, while minimally impacting model helpfulness.
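
The following is a minimal, illustrative sketch of the general idea described in the abstract: extract a revision vector as the difference between mean activations on safe (positive) and unsafe (negative) samples at a chosen layer, then add a scaled copy of that vector to the layer's output during generation, with no parameter update. It is not the authors' released implementation; the layer index, the scaling coefficient `alpha`, and the `model.model.layers` access path (typical of LLaMA-style backbones in Hugging Face Transformers) are assumptions for illustration only.

```python
# Hypothetical sketch of layer-level activation revision via forward hooks.
# Assumes `model` is a Hugging Face causal LM (e.g., the LLM backbone of a
# VLM) with LLaMA-style `model.model.layers`, and `tokenizer` its tokenizer.
import torch


@torch.no_grad()
def last_token_activation(model, tokenizer, text, layer_idx):
    """Hidden state of the final token at a given decoder layer."""
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    outputs = model(**inputs, output_hidden_states=True)
    return outputs.hidden_states[layer_idx][0, -1, :]


@torch.no_grad()
def compute_revision_vector(model, tokenizer, positive_texts, negative_texts, layer_idx):
    """One possible extraction strategy: mean difference between activations
    on positive (safe) and negative (unsafe) samples."""
    pos = torch.stack([last_token_activation(model, tokenizer, t, layer_idx)
                       for t in positive_texts]).mean(dim=0)
    neg = torch.stack([last_token_activation(model, tokenizer, t, layer_idx)
                       for t in negative_texts]).mean(dim=0)
    return pos - neg


def add_revision_hook(model, layer_idx, revision_vec, alpha=1.0):
    """Register a hook that shifts the chosen layer's output along the
    revision direction during generation (weights stay frozen)."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + alpha * revision_vec.to(hidden.dtype)
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
    return model.model.layers[layer_idx].register_forward_hook(hook)
```

The handle returned by `add_revision_hook` can be kept around and removed with `handle.remove()` after generation; head-level revision would instead target attention head outputs rather than the full layer output.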

Published

2025-04-11

How to Cite

Li, Q., Geng, J., Zhu, D., Chen, Z., Song, K., Ma, L., & Karray, F. (2025). Internal Activation Revision: Safeguarding Vision Language Models Without Parameter Update. Proceedings of the AAAI Conference on Artificial Intelligence, 39(26), 27428–27436. https://doi.org/10.1609/aaai.v39i26.34954

Section

AAAI Technical Track on AI Alignment