ProtCLIP: Function-Informed Protein Multi-Modal Learning

Authors

  • Hanjing Zhou, College of Computer Science and Technology, Zhejiang University; State Key Laboratory of Transvascular Implantation Devices of The Second Affiliated Hospital, Zhejiang University; Alibaba Cloud Computing
  • Mingze Yin, College of Computer Science and Technology, Zhejiang University; State Key Laboratory of Transvascular Implantation Devices of The Second Affiliated Hospital, Zhejiang University
  • Wei Wu, School of Artificial Intelligence and Data Science, University of Science and Technology of China
  • Mingyang Li, Alibaba Cloud Computing
  • Kun Fu, Alibaba Cloud Computing
  • Jintai Chen, AI Thrust, Information Hub, HKUST(Guangzhou)
  • Jian Wu, State Key Laboratory of Transvascular Implantation Devices of The Second Affiliated Hospital, Zhejiang University; Zhejiang Key Laboratory of Medical Imaging Artificial Intelligence
  • Zheng Wang, Alibaba Cloud Computing

DOI:

https://doi.org/10.1609/aaai.v39i21.34456

Abstract

The multi-modality pre-training paradigm that aligns protein sequences with biological descriptions has learned general protein representations and achieved promising performance in various downstream applications. However, these works have been unable to replicate the extraordinary success of language-supervised visual foundation models, owing to ineffective use of aligned protein-text paired data and the lack of an effective function-informed pre-training paradigm. To address these issues, this paper curates a large-scale protein-text paired dataset called ProtAnno with a property-driven sampling strategy and introduces a novel function-informed protein pre-training paradigm. Specifically, the sampling strategy determines selection probability based on sample confidence and property coverage, balancing data quality and data quantity in the face of large-scale noisy data. Furthermore, motivated by the significance of protein-specific functional mechanisms, the proposed paradigm explicitly models static and dynamic functional segments of proteins via two segment-wise pre-training objectives, injecting fine-grained information in a function-informed manner. Leveraging all these innovations, we develop ProtCLIP, a multi-modality foundation model that comprehensively represents function-aware protein embeddings. On 22 protein benchmarks spanning 5 categories, including protein functionality classification, mutation effect prediction, cross-modal transformation, semantic similarity inference, and protein-protein interaction prediction, ProtCLIP consistently achieves state-of-the-art performance, with remarkable improvements of 75% on average over five cross-modal transformation benchmarks, 59.9% in GO-CC, and 39.7% in GO-BP protein function prediction. These experimental results verify the extraordinary potential of ProtCLIP as a protein multi-modality foundation model.
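The property-driven sampling idea in the abstract (selection probability combining per-sample confidence with property coverage) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's actual formulation: the function name `selection_probs`, the `alpha` trade-off, and the rarity-based coverage term are all assumptions introduced here for clarity.

```python
import math
from collections import Counter

def selection_probs(samples, alpha=0.5):
    """Hypothetical sketch of property-driven sampling.

    Each protein-text pair is scored by combining its annotation
    confidence with the rarity of its property label (rare properties
    score higher, encouraging coverage); scores are then normalized
    into selection probabilities.

    samples: list of (confidence, property_label) tuples
    alpha:   trade-off between data quality (confidence) and
             property coverage (label rarity)
    """
    counts = Counter(label for _, label in samples)
    total = len(samples)
    scores = []
    for conf, label in samples:
        # Negative log-frequency: rarer property labels get larger weight.
        coverage = -math.log(counts[label] / total)
        scores.append(alpha * conf + (1 - alpha) * coverage)
    z = sum(scores)
    return [s / z for s in scores]
```

Under this sketch, a high-confidence pair annotated with a rare property is sampled most often, while low-confidence pairs carrying already well-covered properties are down-weighted, which is one plausible way to balance quality against coverage on large noisy corpora.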

Published

2025-04-11

How to Cite

Zhou, H., Yin, M., Wu, W., Li, M., Fu, K., Chen, J., … Wang, Z. (2025). ProtCLIP: Function-Informed Protein Multi-Modal Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 39(21), 22937–22945. https://doi.org/10.1609/aaai.v39i21.34456

Section

AAAI Technical Track on Machine Learning VII