KPL: Training-Free Medical Knowledge Mining of Vision-Language Models

Authors

  • Jiaxiang Liu (ZJU-Angelalign R&D Center for Intelligence Healthcare, ZJU-UIUC Institute, Zhejiang University, China; Zhejiang Key Laboratory of Medical Imaging Artificial Intelligence, Zhejiang University, China)
  • Tianxiang Hu (ZJU-Angelalign R&D Center for Intelligence Healthcare, ZJU-UIUC Institute, Zhejiang University, China; Zhejiang Key Laboratory of Medical Imaging Artificial Intelligence, Zhejiang University, China)
  • Jiawei Du (Centre for Frontier AI Research (CFAR), Agency for Science, Technology and Research (A*STAR), Singapore; Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), Singapore)
  • Ruiyuan Zhang (ZJU-Angelalign R&D Center for Intelligence Healthcare, ZJU-UIUC Institute, Zhejiang University, China)
  • Joey Tianyi Zhou (Centre for Frontier AI Research (CFAR), Agency for Science, Technology and Research (A*STAR), Singapore; Institute of High Performance Computing (IHPC), Agency for Science, Technology and Research (A*STAR), Singapore)
  • Zuozhu Liu (ZJU-Angelalign R&D Center for Intelligence Healthcare, ZJU-UIUC Institute, Zhejiang University, China; Zhejiang Key Laboratory of Medical Imaging Artificial Intelligence, Zhejiang University, China)

DOI

https://doi.org/10.1609/aaai.v39i18.34075

Abstract

Vision-Language Models such as CLIP excel in image recognition due to extensive image-text pre-training. However, applying CLIP to zero-shot classification, particularly for medical image diagnosis, faces two challenges: 1) representing an image class solely by its category name is inadequate; 2) a modality gap separates the visual and text spaces produced by the CLIP encoders. Despite attempts to enrich disease descriptions with large language models, the lack of class-specific knowledge often leads to poor performance. In addition, empirical evidence suggests that existing proxy learning methods for zero-shot image classification, developed on natural image datasets, are unstable when applied to medical datasets. To tackle these challenges, we introduce Knowledge Proxy Learning (KPL) to mine knowledge from CLIP. KPL leverages CLIP's multimodal understanding for medical image classification through Text Proxy Optimization and Multimodal Proxy Learning. Specifically, KPL retrieves image-relevant knowledge descriptions from a constructed knowledge-enhanced base to enrich semantic text proxies. It then harnesses input images and these descriptions, encoded via CLIP, to stably generate multimodal proxies that boost zero-shot classification performance. Extensive experiments on both medical and natural image datasets demonstrate that KPL enables effective zero-shot image classification, outperforming all baselines. These findings highlight the great potential of this paradigm of mining knowledge from CLIP for medical image classification and broader areas.
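The core idea the abstract describes, enriching each class's text proxy with several knowledge descriptions and classifying by similarity in CLIP's shared embedding space, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, and random vectors stand in for real CLIP encoder outputs.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    """Project embeddings onto the unit sphere, as CLIP does before matching."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def build_text_proxies(desc_embeds_per_class):
    """Average several description embeddings per class into one text proxy,
    then re-normalize so cosine similarity reduces to a dot product."""
    return l2_normalize(np.stack([e.mean(axis=0) for e in desc_embeds_per_class]))

def classify(image_embed, proxies):
    """Zero-shot prediction: the class whose proxy is most similar to the image."""
    sims = l2_normalize(image_embed) @ proxies.T
    return int(np.argmax(sims))

# Toy stand-ins for CLIP encoder outputs (512-d), three descriptions per class.
rng = np.random.default_rng(0)
class_a_descs = l2_normalize(rng.normal(loc=1.0, size=(3, 512)))
class_b_descs = l2_normalize(rng.normal(loc=-1.0, size=(3, 512)))
proxies = build_text_proxies([class_a_descs, class_b_descs])

query = l2_normalize(rng.normal(loc=1.0, size=512))  # drawn near class 0
print(classify(query, proxies))
```

Averaging multiple retrieved descriptions per class gives a richer proxy than a single category name; the paper's full method additionally refines these proxies with image information (Multimodal Proxy Learning), which this sketch omits.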

Published

2025-04-11

How to Cite

Liu, J., Hu, T., Du, J., Zhang, R., Zhou, J. T., & Liu, Z. (2025). KPL: Training-Free Medical Knowledge Mining of Vision-Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 39(18), 18852-18860. https://doi.org/10.1609/aaai.v39i18.34075

Section

AAAI Technical Track on Machine Learning IV