Towards a Multimodal Large Language Model with Pixel-Level Insight for Biomedicine

Authors

  • Xiaoshuang Huang (Baidu Inc; China Agricultural University)
  • Lingdong Shen (Institute of Automation, Chinese Academy of Sciences)
  • Jia Liu (Baidu Inc)
  • Fangxin Shang (Baidu Inc)
  • Hongxiang Li (Peking University)
  • Haifeng Huang (Baidu Inc)
  • Yehui Yang (Baidu Inc)

DOI:

https://doi.org/10.1609/aaai.v39i4.32394

Abstract

In recent years, Multimodal Large Language Models (MLLMs) have made notable advances, demonstrating the feasibility of an intelligent biomedical assistant. However, current biomedical MLLMs predominantly focus on image-level understanding and restrict interaction to textual commands, which limits both their capabilities and their flexibility of use. In this paper, we introduce MedPLIB, a novel end-to-end multimodal large language model for the biomedical domain with pixel-level understanding. It supports visual question answering (VQA), arbitrary pixel-level prompts (points, bounding boxes, and free-form shapes), and pixel-level grounding. We propose a novel Mixture-of-Experts (MoE) multi-stage training strategy that first trains a visual-language expert model and a pixel-grounding expert model in separate phases, then fine-tunes them jointly with MoE. This strategy coordinates multitask learning effectively while keeping the inference cost equivalent to that of a single expert model. To advance research on biomedical MLLMs, we introduce the Medical Complex Vision Question Answering dataset (MeCoVQA), which spans 8 imaging modalities for complex medical question answering and image region understanding. Experimental results show that MedPLIB achieves state-of-the-art results across multiple medical visual-language tasks. More importantly, in zero-shot evaluation of the pixel-grounding task, MedPLIB leads the best small and large models by margins of 19.7 and 15.6 points, respectively, on the mDice metric.
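The claim that inference cost stays equivalent to a single expert follows from top-1 routing: each token activates only one expert's feed-forward path. The sketch below illustrates this idea in PyTorch under our own assumptions; the class and parameter names (TwoExpertMoE, dim, hidden) are illustrative and do not come from the paper's released code, and the real model's routing and expert initialization may differ.

```python
import torch
import torch.nn as nn

class TwoExpertMoE(nn.Module):
    """Illustrative two-expert MoE feed-forward layer with top-1 routing.

    In the paper's staged scheme, the two experts would be initialized
    from separately pre-trained vision-language and pixel-grounding
    phases before joint MoE fine-tuning (assumption for this sketch).
    """

    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(2)  # e.g., vision-language expert, pixel-grounding expert
        ])
        self.router = nn.Linear(dim, 2)  # per-token gating logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim)
        gates = self.router(x).softmax(dim=-1)   # (batch, seq, 2)
        top1 = gates.argmax(dim=-1)              # chosen expert index per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top1 == i                     # tokens routed to expert i
            if mask.any():
                out[mask] = expert(x[mask])      # only one expert runs per token
        return out
```

Because each token passes through exactly one expert's FFN, the forward FLOPs of this layer match those of a single expert, regardless of how many experts are held in memory.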


Published

2025-04-11

How to Cite

Huang, X., Shen, L., Liu, J., Shang, F., Li, H., Huang, H., & Yang, Y. (2025). Towards a Multimodal Large Language Model with Pixel-Level Insight for Biomedicine. Proceedings of the AAAI Conference on Artificial Intelligence, 39(4), 3779–3787. https://doi.org/10.1609/aaai.v39i4.32394

Issue

Vol. 39 No. 4 (2025)

Section

AAAI Technical Track on Computer Vision III