GRASP: Generic Framework for Health Status Representation Learning Based on Incorporating Knowledge from Similar Patients

Authors

  • Chaohe Zhang, Key Laboratory of High Confidence Software Technologies, Ministry of Education, Beijing, China; School of Electronics Engineering and Computer Science, Peking University, Beijing, China
  • Xin Gao, Key Laboratory of High Confidence Software Technologies, Ministry of Education, Beijing, China; School of Electronics Engineering and Computer Science, Peking University, Beijing, China
  • Liantao Ma, Key Laboratory of High Confidence Software Technologies, Ministry of Education, Beijing, China; School of Electronics Engineering and Computer Science, Peking University, Beijing, China
  • Yasha Wang, Key Laboratory of High Confidence Software Technologies, Ministry of Education, Beijing, China; National Engineering Research Center of Software Engineering, Peking University, Beijing, China
  • Jiangtao Wang, The Centre for Intelligent Healthcare, Coventry University, UK
  • Wen Tang, Division of Nephrology, Peking University Third Hospital, Beijing, China

DOI:

https://doi.org/10.1609/aaai.v35i1.16152

Keywords:

Healthcare, Medicine & Wellness

Abstract

Deep learning models have been applied to many healthcare tasks based on electronic medical record (EMR) data and have achieved strong performance. Existing methods commonly embed the records of a single patient into a representation for downstream medical tasks. Such methods can learn inadequate representations and yield inferior performance, especially when the patient's data is sparse or of low quality. To address this problem, we propose GRASP, a generic framework for healthcare models. For a given patient, GRASP first finds patients in the dataset who have similar conditions and similar results (i.e., the similar patients), and then enhances the representation learning and prognosis of the given patient by leveraging knowledge extracted from these similar patients. GRASP defines task-specific similarities between patients, finds similar patients carrying useful information accordingly, and then learns a cohort representation to extract the valuable knowledge contained in those similar patients. The cohort information is fused with the current patient's representation to perform the final clinical task. Experimental evaluations on two real-world datasets show that GRASP can be seamlessly integrated into state-of-the-art models with consistent performance improvements. Moreover, under the guidance of medical experts, we verified the findings extracted by GRASP; they are consistent with existing medical knowledge, indicating that GRASP can generate useful insights for the relevant predictions.
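
To make the pipeline described above concrete, the following is a minimal PyTorch sketch of the cohort idea: encode the current patient's EMR sequence, retrieve similar patients from a bank of precomputed patient representations, aggregate them into a cohort representation, and fuse that with the patient's own representation before prediction. This is not the authors' implementation: the GRU backbone, cosine k-nearest-neighbor retrieval, and gated fusion are illustrative assumptions standing in for GRASP's learned, task-specific similarity and cohort components.

    # Illustrative sketch only; the backbone, similarity measure, and fusion
    # are assumptions, not the GRASP components described in the paper.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CohortEnhancedModel(nn.Module):
        def __init__(self, input_dim, hidden_dim, k=5):
            super().__init__()
            self.k = k                                                      # number of similar patients retrieved
            self.encoder = nn.GRU(input_dim, hidden_dim, batch_first=True)  # hypothetical EMR-sequence backbone
            self.fuse_gate = nn.Linear(2 * hidden_dim, hidden_dim)          # gate for fusing patient + cohort info
            self.head = nn.Linear(hidden_dim, 1)                            # e.g., binary prognosis prediction

        def encode(self, x):
            # x: (batch, time, features) EMR sequence -> last hidden state as patient representation
            _, h = self.encoder(x)
            return h.squeeze(0)                                             # (batch, hidden_dim)

        def forward(self, x, cohort_bank):
            # cohort_bank: (N, hidden_dim) representations of other patients in the dataset
            h = self.encode(x)                                              # current patient representation
            sim = F.normalize(h, dim=-1) @ F.normalize(cohort_bank, dim=-1).T  # cosine similarity to all patients
            topk = sim.topk(self.k, dim=-1)                                 # k most similar patients per input
            weights = F.softmax(topk.values, dim=-1)                        # attention weights over retrieved patients
            cohort = (weights.unsqueeze(-1) * cohort_bank[topk.indices]).sum(1)  # cohort representation
            gate = torch.sigmoid(self.fuse_gate(torch.cat([h, cohort], dim=-1)))
            fused = gate * h + (1 - gate) * cohort                          # inject cohort knowledge into the patient
            return torch.sigmoid(self.head(fused))                          # final clinical prediction

In this sketch the cohort bank would be refreshed from the encoder as training proceeds; GRASP instead learns which patients are similar in a task-specific way, so the retrieval step here is only a stand-in for that component.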

Published

2021-05-18

How to Cite

Zhang, C., Gao, X., Ma, L., Wang, Y., Wang, J., & Tang, W. (2021). GRASP: Generic Framework for Health Status Representation Learning Based on Incorporating Knowledge from Similar Patients. Proceedings of the AAAI Conference on Artificial Intelligence, 35(1), 715-723. https://doi.org/10.1609/aaai.v35i1.16152

Section

AAAI Technical Track on Application Domains