Self-Supervised Pre-training for Protein Embeddings Using Tertiary Structures

Authors

  • Yuzhi Guo, University of Texas at Arlington
  • Jiaxiang Wu, Tencent AI Lab
  • Hehuan Ma, University of Texas at Arlington
  • Junzhou Huang, University of Texas at Arlington

DOI:

https://doi.org/10.1609/aaai.v36i6.20636

Keywords:

Machine Learning (ML)

Abstract

A protein's tertiary structure largely determines how it interacts with other molecules. Despite its importance for various structure-related tasks, fully supervised structural data are often time-consuming and costly to obtain. Existing pre-training models mostly focus on amino-acid sequences or multiple sequence alignments, leaving structural information unexploited. In this paper, we propose a self-supervised pre-training model for learning structure embeddings from protein tertiary structures. Native protein structures are perturbed with random noise, and the pre-training model aims at estimating gradients over the perturbed 3D structures. Specifically, we adopt SE(3)-invariant features as model inputs and reconstruct gradients over 3D coordinates with SE(3)-equivariance preserved. Such a paradigm avoids the use of sophisticated SE(3)-equivariant models and dramatically improves the computational efficiency of pre-training. We demonstrate the effectiveness of our pre-training model on two downstream tasks: protein structure quality assessment (QA) and protein-protein interaction (PPI) site prediction. Hierarchical structure embeddings are extracted to enhance the corresponding prediction models. Extensive experiments indicate that such structure embeddings consistently improve prediction accuracy on both downstream tasks.
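To make the denoising objective described in the abstract concrete, the minimal Python/NumPy sketch below perturbs native coordinates with isotropic Gaussian noise, derives the corresponding score-matching target, and shows one generic way to assemble an SE(3)-equivariant per-residue gradient from purely invariant features. The noise scale, the distance-based features, and all function names here are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def perturb_and_score_target(coords, sigma=0.5, rng=None):
        """Perturb native 3D coordinates (N, 3) with isotropic Gaussian noise and
        return the denoising target: the gradient of the log perturbation kernel
        with respect to the perturbed coordinates. `sigma` is an assumed hyperparameter."""
        rng = np.random.default_rng() if rng is None else rng
        noise = rng.normal(scale=sigma, size=coords.shape)
        perturbed = coords + noise
        # For a Gaussian kernel N(x~ | x, sigma^2 I):
        #   grad_{x~} log p(x~ | x) = -(x~ - x) / sigma^2
        score_target = -noise / sigma**2
        return perturbed, score_target

    def invariant_features(perturbed):
        """SE(3)-invariant inputs: here simply pairwise distances, a common
        invariant feature choice (illustrative, not the paper's exact features)."""
        diff = perturbed[:, None, :] - perturbed[None, :, :]   # (N, N, 3)
        dist = np.linalg.norm(diff, axis=-1)                   # (N, N), invariant
        return diff, dist

    def equivariant_gradient(diff, dist, weights):
        """Assemble an SE(3)-equivariant per-residue gradient as a weighted sum of
        unit difference vectors; `weights` (N, N) would come from a network that
        consumes only invariant features, so the output rotates with the input."""
        unit = diff / (dist[..., None] + 1e-8)
        return (weights[..., None] * unit).sum(axis=1)         # (N, 3), equivariant

A pre-training loop under these assumptions would feed the invariant features to a standard (non-equivariant) network that predicts the pairwise weights, assemble the equivariant gradient as above, and minimize its squared error against `score_target`.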

Published

2022-06-28

How to Cite

Guo, Y., Wu, J., Ma, H., & Huang, J. (2022). Self-Supervised Pre-training for Protein Embeddings Using Tertiary Structures. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6801-6809. https://doi.org/10.1609/aaai.v36i6.20636

Issue

Vol. 36 No. 6 (2022)

Section

AAAI Technical Track on Machine Learning I