RLSLM: A Hybrid Framework Combining Reinforcement Learning and a Rule-based Social Locomotion Model for Socially-aware Navigation

Authors

  • Yitian Kou School of Computer Science and Technology, East China Normal University.
  • Yihe Gu Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, East China Normal University.
  • Chen Zhou Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, East China Normal University. School of Psychology and Neuroscience, University of Glasgow.
  • Dandan Zhu School of Computer Science and Technology, East China Normal University.
  • Shu-Guang Kuai Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, East China Normal University. NYU-ECNU Institute of Brain and Cognitive Science. Shanghai Center for Brain Science and Brain-Inspired Technology.

DOI:

https://doi.org/10.1609/aaai.v40i1.37018

Abstract

Navigating human-populated environments without causing discomfort is a critical capability for socially-aware agents. While rule-based approaches offer interpretability through predefined psychological principles, they often lack generalizability and flexibility. Conversely, data-driven methods can learn complex behaviors from large-scale datasets, but are typically inefficient, opaque, and difficult to align with human intuitions. To bridge this gap, we propose RLSLM, a hybrid framework that integrates a rule-based Social Locomotion Model, grounded in empirical behavioral experiments, into the reward function of a reinforcement learning agent. The social locomotion model generates an orientation-sensitive social comfort field that quantifies human comfort across space, enabling socially aligned navigation policies with minimal training. RLSLM then jointly optimizes mechanical energy and social comfort, allowing agents to avoid intrusions into personal or group space. A human-agent interaction experiment using an immersive VR-based setup demonstrates that RLSLM outperforms state-of-the-art rule-based models in user experience. Ablation and sensitivity analyses further show the model’s significantly improved interpretability over conventional data-driven methods. This work presents a scalable, human-centered methodology that effectively integrates cognitive science and machine learning for real-world social navigation.
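To make the reward structure described in the abstract concrete, the sketch below shows one plausible way to combine a mechanical-energy penalty with an orientation-sensitive discomfort field. All function names, the asymmetric-Gaussian field shape, and the weights (`w_energy`, `w_social`, `sigma_front`, `sigma_side`) are illustrative assumptions for exposition, not the actual model or parameters from the paper.

```python
import numpy as np

def social_discomfort(agent_pos, person_pos, person_theta,
                      sigma_front=2.0, sigma_side=1.0):
    """Illustrative orientation-sensitive discomfort: an asymmetric
    Gaussian that extends farther along the person's facing direction.
    (A stand-in for the paper's social comfort field, not its equation.)"""
    d = np.asarray(agent_pos, dtype=float) - np.asarray(person_pos, dtype=float)
    # Rotate the offset into the person's frame (x-axis = facing direction).
    c, s = np.cos(-person_theta), np.sin(-person_theta)
    x, y = c * d[0] - s * d[1], s * d[0] + c * d[1]
    sigma_x = sigma_front if x >= 0 else sigma_side  # wider field in front
    return np.exp(-(x**2 / (2 * sigma_x**2) + y**2 / (2 * sigma_side**2)))

def reward(step_energy, agent_pos, people, w_energy=0.1, w_social=1.0):
    """Joint objective: penalize mechanical energy spent this step and
    total discomfort induced across nearby people (hypothetical weights)."""
    discomfort = sum(social_discomfort(agent_pos, pos, theta)
                     for pos, theta in people)
    return -w_energy * step_energy - w_social * discomfort
```

Under this construction, passing in front of a person at a given distance is penalized more than passing behind them, which is the qualitative behavior an orientation-sensitive comfort field is meant to induce.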

Published

2026-03-14

How to Cite

Kou, Y., Gu, Y., Zhou, C., Zhu, D., & Kuai, S.-G. (2026). RLSLM: A Hybrid Framework Combining Reinforcement Learning and a Rule-based Social Locomotion Model for Socially-aware Navigation. Proceedings of the AAAI Conference on Artificial Intelligence, 40(1), 543-551. https://doi.org/10.1609/aaai.v40i1.37018

Section

AAAI Technical Track on Application Domains I