ComprehendEdit: A Comprehensive Dataset and Evaluation Framework for Multimodal Knowledge Editing

Authors

  • Yaohui Ma (Harbin Institute of Technology; Pengcheng Laboratory; Shenzhen University of Advanced Technology)
  • Xiaopeng Hong (Harbin Institute of Technology; Pengcheng Laboratory)
  • Shizhou Zhang (Northwestern Polytechnical University)
  • Huiyun Li (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; Guangdong Provincial Key Laboratory of Computility Microelectronics)
  • Zhilin Zhu (Harbin Institute of Technology; Pengcheng Laboratory)
  • Wei Luo (Shenzhen University of Advanced Technology)
  • Zhiheng Ma (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences; Guangdong Provincial Key Laboratory of Computility Microelectronics)

DOI:

https://doi.org/10.1609/aaai.v39i18.34127

Abstract

Multimodal large language models (MLLMs) have revolutionized natural language processing and visual understanding, but often contain outdated or inaccurate information. Current multimodal knowledge editing evaluations are limited in scope and potentially biased, focusing on narrow tasks and failing to assess the impact on in-domain samples. To address these issues, we introduce ComprehendEdit, a comprehensive benchmark comprising eight diverse tasks from multiple datasets. We propose two novel metrics: Knowledge Generalization Index (KGI) and Knowledge Preservation Index (KPI), which evaluate editing effects on in-domain samples without relying on AI-synthetic samples. Based on insights from our framework, we establish Hierarchical In-Context Editing (HICE), a baseline method employing a two-stage approach that balances performance across all metrics. This study provides a more comprehensive evaluation framework for multimodal knowledge editing, reveals unique challenges in this field, and offers a baseline method demonstrating improved performance. Our work opens new perspectives for future research and provides a foundation for developing more robust and effective editing techniques for MLLMs.

Published

2025-04-11

How to Cite

Ma, Y., Hong, X., Zhang, S., Li, H., Zhu, Z., Luo, W., & Ma, Z. (2025). ComprehendEdit: A Comprehensive Dataset and Evaluation Framework for Multimodal Knowledge Editing. Proceedings of the AAAI Conference on Artificial Intelligence, 39(18), 19323–19331. https://doi.org/10.1609/aaai.v39i18.34127

Section

AAAI Technical Track on Machine Learning IV