Improving Large Molecular Language Model via Relation-aware Multimodal Collaboration

Authors

  • Jinyoung Park Korea Advanced Institute of Science & Technology
  • Minseong Bae Korea Advanced Institute of Science & Technology
  • Jeehye Na Korea Advanced Institute of Science & Technology
  • Hyunwoo J. Kim Korea Advanced Institute of Science & Technology

DOI:

https://doi.org/10.1609/aaai.v40i2.37058

Abstract

Large language models (LLMs) have demonstrated strong instruction-following capabilities and achieved powerful performance on various tasks. Inspired by their success, recent works in the molecular domain have developed large molecular language models (LMLMs) that integrate 1D molecular strings or 2D molecular graphs into language models. However, existing LMLMs often suffer from hallucination and limited robustness, largely due to inadequate integration of diverse molecular modalities such as 1D sequences, 2D molecular graphs, and 3D conformations. To address these limitations, we propose CoLLaMo, a large language model-based molecular assistant equipped with a multi-level molecular modality-collaborative projector. The relation-aware modality-collaborative attention mechanism in the projector facilitates fine-grained, relation-guided information exchange between atoms by incorporating 2D structural and 3D spatial relations. Furthermore, we present new molecule-centric automatic evaluation measures, including a hallucination assessment metric and a GPT-based caption quality evaluation, to address the limitations of token-based generic evaluation metrics (e.g., BLEU) widely used in assessing the molecular comprehension of LMLMs. Our extensive experiments demonstrate that CoLLaMo enhances the molecular modality generalization capabilities of LMLMs, achieving the best performance on multiple tasks, including molecule captioning, computed property QA, descriptive property QA, motif counting, and IUPAC name prediction.
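The relation-aware attention described above can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the authors' method: it assumes the 2D structural relation (e.g., a shortest-path penalty over the molecular graph) and the 3D spatial relation (e.g., a pairwise-distance penalty) each enter the attention logits as additive bias matrices, which is one common way to inject pairwise relations into attention. The function name `relation_aware_attention` and the bias construction are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def relation_aware_attention(q, k, v, bias_2d, bias_3d):
    """Scaled dot-product attention over per-atom features, with
    additive pairwise biases from 2D structural relations and 3D
    spatial relations (hypothetical sketch, not the paper's code)."""
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)          # (n_atoms, n_atoms)
    logits = logits + bias_2d + bias_3d    # relation-guided biasing
    attn = softmax(logits, axis=-1)        # rows sum to 1
    return attn @ v                        # (n_atoms, d)

# Toy molecule: 4 atoms with 8-dim features.
rng = np.random.default_rng(0)
n, d = 4, 8
q, k, v = (rng.normal(size=(n, d)) for _ in range(3))

# Hypothetical relations: penalize distant atom pairs so attention
# concentrates on structurally and spatially close neighbors.
shortest_path = np.array([[0, 1, 2, 3],
                          [1, 0, 1, 2],
                          [2, 1, 0, 1],
                          [3, 2, 1, 0]], dtype=float)
coords = rng.normal(size=(n, 3))
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

out = relation_aware_attention(q, k, v,
                               bias_2d=-0.5 * shortest_path,
                               bias_3d=-0.1 * dist)
print(out.shape)  # one fused representation per atom
```

In practice such biases would be learned embeddings of the relation types rather than fixed penalties; the sketch only shows where 2D and 3D relations can enter the attention computation.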

Published

2026-03-14

How to Cite

Park, J., Bae, M., Na, J., & Kim, H. J. (2026). Improving Large Molecular Language Model via Relation-aware Multimodal Collaboration. Proceedings of the AAAI Conference on Artificial Intelligence, 40(2), 899-907. https://doi.org/10.1609/aaai.v40i2.37058

Section

AAAI Technical Track on Application Domains II