MultiMedBench: A Scenario-Aware Benchmark for Evaluating Knowledge Editing in Medical VQA
DOI:
https://doi.org/10.1609/aaai.v40i40.40679
Abstract
Knowledge editing (KE) provides a scalable approach for updating factual knowledge in large language models without full retraining. While previous studies have demonstrated its effectiveness in general domains and medical QA tasks, little attention has been paid to KE in multimodal medical scenarios. Unlike text-only settings, medical KE demands integrating updated knowledge with visual reasoning to support safe and interpretable clinical decisions. To address this gap, we propose MultiMedBench, the first benchmark tailored to evaluating KE in clinical multimodal tasks. Our framework spans both understanding and reasoning task types, defines a three-dimensional metric suite (reliability, generality, and locality), and supports cross-paradigm comparisons across general and domain-specific models. We conduct extensive experiments under single-editing and lifelong-editing settings. The results suggest that current methods struggle with generalization and long-tail reasoning, particularly in complex clinical workflows. We further present an efficiency analysis (e.g., edit latency, memory footprint), revealing practical trade-offs in real-world deployment across KE paradigms. Overall, MultiMedBench not only reveals the limitations of current approaches but also provides a solid foundation for developing clinically robust knowledge editing techniques in the future.
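For readers unfamiliar with the metric suite named above, the sketch below illustrates how reliability, generality, and locality are typically scored in the knowledge-editing literature: reliability checks the edited query itself, generality checks paraphrased or semantically equivalent queries, and locality checks that answers on unrelated queries are unchanged from before the edit. This is a minimal illustration under those standard definitions; all names, the record layout, and the exact-match scoring rule are hypothetical and are not the paper's API.

```python
# Minimal sketch of the three-dimensional KE metric suite (reliability,
# generality, locality), assuming the standard definitions from the
# knowledge-editing literature. All identifiers here are hypothetical
# illustrations, not the benchmark's actual interface.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class EditCase:
    edit_query: str        # the query (e.g., an image-question pair) targeted by the edit
    target: str            # the updated answer the edit should install
    rephrased_queries: List[str] = field(default_factory=list)   # paraphrases (generality)
    unrelated_queries: List[str] = field(default_factory=list)   # out-of-scope probes (locality)
    unrelated_answers: List[str] = field(default_factory=list)   # pre-edit answers to those probes


def evaluate_edit(model: Callable[[str], str], case: EditCase) -> dict:
    """Score a single edit along reliability, generality, and locality."""
    # Reliability: the edited model produces the new answer on the edited query.
    reliability = float(model(case.edit_query) == case.target)

    # Generality: the new answer carries over to semantically equivalent queries.
    gen_hits = [model(q) == case.target for q in case.rephrased_queries]
    generality = sum(gen_hits) / max(len(gen_hits), 1)

    # Locality: answers on unrelated queries match their pre-edit answers.
    loc_hits = [model(q) == a for q, a in zip(case.unrelated_queries, case.unrelated_answers)]
    locality = sum(loc_hits) / max(len(loc_hits), 1)

    return {"reliability": reliability, "generality": generality, "locality": locality}
```

In a lifelong-editing setting, the same three scores would be re-computed after each successive edit, which is what exposes the degradation on generalization and locality that the abstract reports.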
Published
2026-03-14
How to Cite
Wen, S., Chen, H., Wang, Y., Pan, Z., Chen, X., Tian, Y., … Huang, S.-J. (2026). MultiMedBench: A Scenario-Aware Benchmark for Evaluating Knowledge Editing in Medical VQA. Proceedings of the AAAI Conference on Artificial Intelligence, 40(40), 33872–33880. https://doi.org/10.1609/aaai.v40i40.40679
Section
AAAI Technical Track on Natural Language Processing V