Note2Chat: Improving LLMs for Multi-Turn Clinical History Taking Using Medical Notes

Authors

  • Yang Zhou, Institute of High Performance Computing, A*STAR, Singapore
  • Zhenting Sheng, Nanyang Technological University
  • Mingrui Tan, Institute of High Performance Computing, A*STAR, Singapore
  • Yuting Song, Institute of High Performance Computing, A*STAR, Singapore
  • Jun Zhou, Institute of High Performance Computing, A*STAR, Singapore
  • Yu Heng Kwan, National University of Singapore; Singapore General Hospital
  • Lian Leng Low, National University of Singapore; Singapore General Hospital
  • Yang Bai, Institute of High Performance Computing, A*STAR, Singapore
  • Yong Liu, Institute of High Performance Computing, A*STAR, Singapore

DOI:

https://doi.org/10.1609/aaai.v40i41.40821

Abstract

Effective clinical history taking is a foundational yet underexplored component of clinical reasoning. While large language models (LLMs) have shown promise on static benchmarks, they often fall short in dynamic, multi-turn diagnostic settings that require iterative questioning and hypothesis refinement. To address this gap, we propose Note2Chat, a note-driven framework that trains LLMs to conduct structured history taking and diagnosis by learning from widely available medical notes. Instead of relying on scarce and sensitive dialogue data, we convert real-world medical notes into high-quality doctor-patient dialogues using a decision tree-guided generation and refinement pipeline. We then propose a three-stage fine-tuning strategy combining supervised learning, simulated data augmentation, and preference learning. Furthermore, we propose a novel single-turn reasoning paradigm that reframes history taking as a sequence of single-turn reasoning problems. This design enhances interpretability and enables local supervision, dynamic adaptation, and greater sample efficiency. Experimental results show that our method substantially improves clinical reasoning, achieving gains of +16.9 in F1 and +21.0 in Top-1 diagnostic accuracy over GPT-4o.

Published

2026-03-14

How to Cite

Zhou, Y., Sheng, Z., Tan, M., Song, Y., Zhou, J., Kwan, Y. H., … Liu, Y. (2026). Note2Chat: Improving LLMs for Multi-Turn Clinical History Taking Using Medical Notes. Proceedings of the AAAI Conference on Artificial Intelligence, 40(41), 35149–35157. https://doi.org/10.1609/aaai.v40i41.40821

Section

AAAI Technical Track on Natural Language Processing VI