Cross-Modal Unlearning via Influential Neuron Path Editing in Multimodal Large Language Models

Authors

  • Kunhao Li, South China University of Technology
  • Wenhao Li, South China University of Technology
  • Di Wu, La Trobe University
  • Lei Yang, South China University of Technology
  • Jun Bai, McGill University
  • Ju Jia, Southeast University
  • Jason Xue, Data61, CSIRO

DOI:

https://doi.org/10.1609/aaai.v40i42.40870

Abstract

Multimodal Large Language Models (MLLMs) extend foundation models to real-world applications by integrating inputs such as text and vision. However, their broad knowledge capacity raises growing concerns about privacy leakage, toxicity mitigation, and intellectual property violations. Machine Unlearning (MU) offers a practical solution by selectively forgetting targeted knowledge while preserving overall model utility. When applied to MLLMs, existing neuron-editing-based MU approaches face two fundamental challenges: (i) forgetting becomes inconsistent across modalities because existing point-wise attribution methods fail to capture the structured, layer-by-layer information flow that connects different modalities; and (ii) general knowledge performance declines when sensitive neurons that also support important reasoning paths are pruned, as this disrupts the model's ability to generalize. To alleviate these limitations, we propose a multimodal influential neuron path editor (MIP-Editor) for MU. Our approach introduces modality-specific attribution scores to identify influential neuron paths responsible for encoding forget-set knowledge and applies influential-path-aware neuron editing via representation misdirection. This strategy enables effective, coordinated forgetting across modalities while preserving the model's general capabilities. Experimental results demonstrate that MIP-Editor achieves superior unlearning performance on multimodal tasks, with a maximum forgetting rate of 87.75% and up to 54.26% improvement in general knowledge retention. On textual tasks, MIP-Editor achieves up to 80.65% forgetting and preserves 77.90% of general performance.
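The abstract's two-step recipe (select an influential neuron path layer by layer from attribution scores, then edit those neurons via representation misdirection) can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's implementation: the attribution scores, the greedy one-neuron-per-layer selection, and the noise-based misdirection are all simplifying assumptions for illustration.

```python
import random

def select_influential_path(attributions):
    """Greedily pick the highest-attribution neuron in each layer,
    forming a layer-by-layer 'influential neuron path'.
    attributions: list of per-layer score lists (hypothetical
    modality-specific attribution scores)."""
    return [max(range(len(layer)), key=layer.__getitem__)
            for layer in attributions]

def misdirect(activations, path, noise_scale=1.0, seed=0):
    """Representation-misdirection sketch: overwrite the activation of
    each path neuron with random noise, disrupting the forget-set
    signal while leaving all other neurons untouched."""
    rng = random.Random(seed)
    edited = [list(layer) for layer in activations]  # copy, keep input intact
    for layer_idx, neuron_idx in enumerate(path):
        edited[layer_idx][neuron_idx] = noise_scale * rng.gauss(0.0, 1.0)
    return edited

# Toy usage: two layers of three neurons each.
attrs = [[0.1, 0.9, 0.2], [0.5, 0.3, 0.8]]
path = select_influential_path(attrs)          # highest score per layer
acts = [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]
edited = misdirect(acts, path)                 # only path neurons change
```

In the actual method, editing would operate on the MLLM's weights or hidden states rather than on toy lists, with separate paths identified per modality.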

Published

2026-03-14

How to Cite

Li, K., Li, W., Wu, D., Yang, L., Bai, J., Jia, J., & Xue, J. (2026). Cross-Modal Unlearning via Influential Neuron Path Editing in Multimodal Large Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 40(42), 35589–35597. https://doi.org/10.1609/aaai.v40i42.40870

Section

AAAI Technical Track on Philosophy and Ethics of AI