JoLT: Jointly Learned Representations of Language and Time-Series for Clinical Time-Series Interpretation (Student Abstract)
DOI:
https://doi.org/10.1609/aaai.v38i21.30423
Keywords:
Time-series, Foundation Models, AI For Healthcare, Representation Learning, Text Generation, Language Models
Abstract
Time-series and text data are prevalent in healthcare and frequently co-exist, yet they are typically modeled in isolation. Even studies that jointly model time-series and text do so by converting the time-series to images or graphs. We hypothesize that explicitly modeling time-series jointly with text can improve tasks such as summarization and question answering for time-series data, which have received little attention so far. To address this gap, we introduce JoLT to jointly learn desired representations from pre-trained time-series and text models. JoLT utilizes a Querying Transformer (Q-Former) to align the time-series and text representations. Our experiments on a large real-world electrocardiography dataset for medical time-series summarization show that JoLT outperforms state-of-the-art image captioning approaches.
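The abstract describes the Q-Former alignment only at a high level. As a rough illustration of the idea, and not the authors' implementation, the minimal PyTorch sketch below shows how a small set of learnable query tokens could cross-attend to frozen time-series encoder features and produce a soft prefix for a language model; the module name `QFormerBridge`, the dimensions, and the number of queries are all hypothetical.

```python
# Hypothetical sketch of Q-Former-style alignment (not the paper's code):
# learnable queries cross-attend to frozen time-series features, and the
# resulting embeddings serve as a soft prefix for a text decoder.
import torch
import torch.nn as nn

class QFormerBridge(nn.Module):
    def __init__(self, ts_dim: int, lm_dim: int, n_queries: int = 32, n_heads: int = 8):
        super().__init__()
        # Learnable query tokens that "read out" the time-series representation.
        self.queries = nn.Parameter(torch.randn(n_queries, lm_dim) * 0.02)
        # Project time-series encoder features into the language-model space.
        self.ts_proj = nn.Linear(ts_dim, lm_dim)
        # Cross-attention: queries attend over the projected time-series tokens.
        self.cross_attn = nn.MultiheadAttention(lm_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(lm_dim)

    def forward(self, ts_feats: torch.Tensor) -> torch.Tensor:
        # ts_feats: (batch, seq_len, ts_dim) from a frozen time-series encoder.
        kv = self.ts_proj(ts_feats)
        q = self.queries.unsqueeze(0).expand(ts_feats.size(0), -1, -1)
        attended, _ = self.cross_attn(q, kv, kv)
        # (batch, n_queries, lm_dim): prefix tokens to prepend to text embeddings.
        return self.norm(attended + q)

# Usage: encode an ECG, then feed the bridge output to the language model
# alongside the text token embeddings before decoding a summary.
bridge = QFormerBridge(ts_dim=256, lm_dim=768)
ts_feats = torch.randn(4, 1000, 256)   # e.g. features of encoded ECG segments
prefix = bridge(ts_feats)              # (4, 32, 768)
```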
Published
2024-03-24
How to Cite
Cai, Y., Srinivasan, A., Goswami, M., Choudhry, A., & Dubrawski, A. (2024). JoLT: Jointly Learned Representations of Language and Time-Series for Clinical Time-Series Interpretation (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23447-23448. https://doi.org/10.1609/aaai.v38i21.30423
Issue
Vol. 38 No. 21 (2024)
Section
AAAI Student Abstract and Poster Program