Multi-Label Few-Shot ICD Coding as Autoregressive Generation with Prompt

Authors

  • Zhichao Yang College of Information and Computer Sciences, University of Massachusetts Amherst
  • Sunjae Kwon College of Information and Computer Sciences, University of Massachusetts Amherst
  • Zonghai Yao College of Information and Computer Sciences, University of Massachusetts Amherst
  • Hong Yu College of Information and Computer Sciences, University of Massachusetts Amherst Department of Computer Science, University of Massachusetts Lowell Center for Healthcare Organization and Implementation Research, Veterans Affairs Bedford Healthcare System

DOI:

https://doi.org/10.1609/aaai.v37i4.25668

Keywords:

APP: Healthcare, Medicine & Wellness, ML: Multi-Class/Multi-Label Learning & Extreme Classification, SNLP: Applications, SNLP: Generation, SNLP: Language Models, SNLP: Text Classification

Abstract

Automatic International Classification of Diseases (ICD) coding aims to assign multiple ICD codes to a medical note averaging 3,000+ tokens. The task is challenging due to the high-dimensional space of multi-label assignment (155,000+ ICD code candidates) and the long-tail challenge: many ICD codes are infrequently assigned, yet these infrequent codes are clinically important. This study addresses the long-tail challenge by transforming the multi-label classification task into an autoregressive generation task. Specifically, we first introduce a novel pretraining objective that generates free-text diagnosis and procedure descriptions using the SOAP structure, the medical logic physicians use for note documentation. Second, instead of directly predicting in the high-dimensional space of ICD codes, our model generates lower-dimensional text descriptions, from which ICD codes are then inferred. Third, we design a novel prompt template for multi-label classification. We evaluate our Generation with Prompt (GP) model on the all-code assignment benchmark (MIMIC-III-full) and the few-shot ICD code assignment benchmark (MIMIC-III-few). Experiments on MIMIC-III-few show that our model achieves a macro F1 of 30.2, substantially outperforming the previous MIMIC-III-full SOTA model (macro F1 4.3) and a model specifically designed for the few-/zero-shot setting (macro F1 18.7). Finally, we design a novel ensemble learner, a cross-attention reranker with prompts, to integrate the previous SOTA predictions with our best few-shot coding predictions. Experiments on MIMIC-III-full show that our ensemble learner substantially improves both macro and micro F1, from 10.4 to 14.6 and from 58.2 to 59.1, respectively.
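The second step above, inferring an ICD code from generated free-text, can be illustrated with a minimal sketch. This is not the paper's implementation: the code table is a tiny hypothetical sample, and token overlap stands in for whatever matching the actual model uses.

```python
# Toy sketch: map a generated free-text diagnosis description to an ICD code
# by matching it against official code descriptions via token overlap.
# ICD_DESCRIPTIONS is a tiny hypothetical sample, not the real 155,000+ codes.

ICD_DESCRIPTIONS = {
    "I10": "essential primary hypertension",
    "E11.9": "type 2 diabetes mellitus without complications",
    "J18.9": "pneumonia unspecified organism",
}

def infer_icd_code(generated_description: str) -> str:
    """Return the ICD code whose description best overlaps the generated text."""
    gen_tokens = set(generated_description.lower().split())

    def overlap(code: str) -> float:
        desc_tokens = set(ICD_DESCRIPTIONS[code].split())
        return len(gen_tokens & desc_tokens) / len(desc_tokens)

    return max(ICD_DESCRIPTIONS, key=overlap)

print(infer_icd_code("patient has type 2 diabetes mellitus"))  # E11.9
```

Because the model generates one description per diagnosis, repeating this lookup over each generated description yields the multi-label code set.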

Published

2023-06-26

How to Cite

Yang, Z., Kwon, S., Yao, Z., & Yu, H. (2023). Multi-Label Few-Shot ICD Coding as Autoregressive Generation with Prompt. Proceedings of the AAAI Conference on Artificial Intelligence, 37(4), 5366-5374. https://doi.org/10.1609/aaai.v37i4.25668

Section

AAAI Technical Track on Domain(s) of Application