PPGPT: Transferring Next-Token Modeling from Language to PPG Signals

Authors

  • Zexing Zhang, Changchun University of Technology; Jilin Province Smart Health Joint Innovation Laboratory for the New Generation of AI; Jilin Province Science and Technology Innovation Center for Multimodal Cognitive Computing and Analysis of Medical Biometrics
  • Huimin Lu, Changchun University of Technology; Jilin Province Smart Health Joint Innovation Laboratory for the New Generation of AI; Jilin Province Science and Technology Innovation Center for Multimodal Cognitive Computing and Analysis of Medical Biometrics
  • Qingxin Zhao, Changchun University of Technology; Jilin Province Smart Health Joint Innovation Laboratory for the New Generation of AI; Jilin Province Science and Technology Innovation Center for Multimodal Cognitive Computing and Analysis of Medical Biometrics

DOI:

https://doi.org/10.1609/aaai.v40i34.40088

Abstract

The success of large language models (LLMs) in cognitive tasks prompts the question of whether their next-token prediction (NTP) paradigm can be adapted to model physiological signals from wearable devices. A key target for this adaptation is photoplethysmography (PPG), the most prevalent sensing modality in consumer wearables for non-invasive monitoring of diverse physiological conditions. Unlike in NLP, where NTP aligns with generative objectives, physiological signal analysis involves fundamentally different tasks, such as continuous parameter estimation (regression) and discrete state recognition (classification). This disparity creates a semantic mismatch between the pre-training paradigm and the downstream tasks. To bridge this gap, we propose PPGPT, the first foundation model that reformulates NTP into next-feature token prediction (NFTP), learning hierarchical feature transition probabilities to unify pre-training and downstream objectives. PPGPT features a novel dual-stream encoder that generates feature tokens by jointly modeling temporal dynamics and local-global morphological patterns. The model is developed using a two-stage training framework: it is first pre-trained on a large-scale mixed dataset of 1.6 billion data points and then validated on our newly released BioMTL benchmark, which includes data from 172 subjects over 285 days across seven different tasks. Extensive experiments show that PPGPT significantly outperforms competing methods, achieving a 16.5% improvement in F1-score and a 25.9% reduction in Mean Absolute Error (MAE). Furthermore, the model demonstrates robust few-shot learning capabilities.
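To make the NFTP framing above concrete, the sketch below shows one way a regression-style next-feature-token objective could look. This is a minimal illustration, not the authors' implementation: the causal Transformer backbone, the 256-dimensional feature tokens, and the MSE loss are all assumptions, and PPGPT's dual-stream encoder and actual training objective are not reproduced here.

```python
import torch
import torch.nn as nn

class NextFeatureTokenPredictor(nn.Module):
    """Causal Transformer that predicts the feature token at step t+1
    from feature tokens 1..t (an NFTP-style objective; hypothetical)."""

    def __init__(self, d_model=256, n_heads=4, n_layers=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True
        )
        self.backbone = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Regression head over continuous feature tokens instead of a
        # softmax over a discrete vocabulary, as in language-model NTP.
        self.head = nn.Linear(d_model, d_model)

    def forward(self, tokens):
        # tokens: (batch, seq_len, d_model) feature tokens produced by
        # some PPG encoder (PPGPT uses a dual-stream encoder; not shown).
        seq_len = tokens.size(1)
        causal_mask = nn.Transformer.generate_square_subsequent_mask(seq_len)
        hidden = self.backbone(tokens, mask=causal_mask)
        return self.head(hidden)

def nftp_loss(model, tokens):
    # Predict token t+1 from tokens up to t; an assumed MSE analogue
    # of the cross-entropy next-token loss used in NLP.
    preds = model(tokens[:, :-1, :])
    targets = tokens[:, 1:, :]
    return nn.functional.mse_loss(preds, targets)

# Toy usage: 8 sequences of 32 feature tokens, 256 dimensions each.
model = NextFeatureTokenPredictor()
tokens = torch.randn(8, 32, 256)
loss = nftp_loss(model, tokens)
loss.backward()
```

Under these assumptions, the only structural change relative to language-model NTP is the output head: the model regresses the next continuous feature token rather than classifying over a vocabulary, which aligns the pre-training signal with the regression and classification downstream tasks the abstract describes.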

Published

2026-03-14

How to Cite

Zhang, Z., Lu, H., & Zhao, Q. (2026). PPGPT: Transferring Next-Token Modeling from Language to PPG Signals. Proceedings of the AAAI Conference on Artificial Intelligence, 40(34), 28573–28581. https://doi.org/10.1609/aaai.v40i34.40088

Section

AAAI Technical Track on Machine Learning XI