From Stimuli to Minds: Enhancing Psychological Reasoning in LLMs via Bilateral Reinforcement Learning

Authors

  • Yichao Feng, Nanyang Technological University
  • Haoran Luo, Nanyang Technological University
  • Lang Feng, Nanyang Technological University
  • Shuai Zhao, Nanyang Technological University
  • Anh Tuan Luu, Nanyang Technological University

DOI:

https://doi.org/10.1609/aaai.v40i1.36988

Abstract

Large Language Models show promise in emotion understanding, social reasoning, and empathy, yet struggle with psychologically grounded tasks that require inferring implicit mental states in complex, socially and contextually ambiguous settings. These limitations stem from a lack of theory-aligned supervision and from the difficulty of capturing nuanced mental processes in real-world narratives. To bridge this gap, we leverage expert-labeled scenarios and propose a trajectory-aware reinforcement learning framework that imitates expert psychological reasoning. By integrating real-world stimuli with structured reasoning guidance, our approach enables compact models to internalize social-cognitive principles, perform nuanced inference, and support continual self-improvement. Experiments on multiple benchmarks demonstrate expert-level interpretive capability on psychological tasks.

Published

2026-03-14

How to Cite

Feng, Y., Luo, H., Feng, L., Zhao, S., & Luu, A. T. (2026). From Stimuli to Minds: Enhancing Psychological Reasoning in LLMs via Bilateral Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 40(1), 274-282. https://doi.org/10.1609/aaai.v40i1.36988

Section

AAAI Technical Track on Application Domains I