HumanSense: From Multimodal Perception to Empathetic Context-Aware Responses Through Reasoning MLLMs

Authors

  • Zheng Qin, National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, China
  • Ruobing Zheng, Ant Group, China
  • Yabing Wang, National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, China
  • Tianqi Li, Ant Group, China
  • Yi Yuan, Ant Group, China
  • Jingdong Chen, Ant Group, China
  • Le Wang, National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, National Engineering Research Center for Visual Information and Applications, Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, China

DOI:

https://doi.org/10.1609/aaai.v40i30.39685

Abstract

While Multimodal Large Language Models (MLLMs) show immense promise for achieving truly human-like interactions, progress is hindered by the lack of fine-grained evaluation frameworks for human-centered scenarios, which encompass both the understanding of complex human intentions and the provision of empathetic, context-aware responses. Here we introduce HumanSense, a comprehensive benchmark designed to evaluate the human-centered perception and interaction capabilities of MLLMs, with a particular focus on deep understanding of extended multimodal contexts and the formulation of rational feedback. Our evaluation reveals that leading MLLMs still have considerable room for improvement, particularly on advanced interaction-oriented tasks. Supplementing visual input with audio and text information yields substantial improvements, and omni-modal models show advantages on these tasks. Furthermore, grounded in the observation that appropriate feedback stems from a contextual analysis of the interlocutor's needs and emotions, we posit that reasoning ability is the key to generating such feedback. We devise a multi-stage, modality-progressive reinforcement learning approach, resulting in HumanSense-Omni-Reasoning, which substantially enhances performance on higher-level understanding and interactive tasks. Additionally, we observe that successful reasoning processes appear to exhibit consistent thought patterns; by designing corresponding prompts, we also enhance the performance of non-reasoning models in a training-free manner.
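Illustratively, the training-free prompting idea mentioned at the end of the abstract can be sketched as a reasoning scaffold that walks a non-reasoning model through the thought pattern the authors describe: analyze the context, infer the interlocutor's needs and emotions, then formulate feedback. The template wording and the function below are illustrative assumptions, not the paper's actual prompts or code.

# Hypothetical sketch (not the authors' released code) of a training-free
# reasoning scaffold for human-centered response generation. The three-step
# structure mirrors the abstract's observation that appropriate feedback
# follows from a contextual analysis of the interlocutor's needs and emotions.

REASONING_SCAFFOLD = (
    "You are an empathetic assistant. Before answering, reason in three steps:\n"
    "1. Context: summarize what has happened in the conversation so far.\n"
    "2. Interlocutor: infer the speaker's current needs and emotional state.\n"
    "3. Response: formulate feedback that addresses both.\n"
    "Think through steps 1-3 explicitly, then give only the final response.\n\n"
    "Conversation:\n{context}\n\nSpeaker's last utterance:\n{utterance}"
)

def build_reasoning_prompt(context: str, utterance: str) -> str:
    """Wrap one dialogue turn in the (assumed) three-step reasoning scaffold."""
    return REASONING_SCAFFOLD.format(context=context, utterance=utterance)

if __name__ == "__main__":
    prompt = build_reasoning_prompt(
        context="A: I failed my driving test again today.",
        utterance="I don't know if I should even bother retaking it.",
    )
    print(prompt)  # Feed this to any non-reasoning MLLM's text channel.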

Published

2026-03-14

How to Cite

Qin, Z., Zheng, R., Wang, Y., Li, T., Yuan, Y., Chen, J., & Wang, L. (2026). HumanSense: From Multimodal Perception to Empathetic Context-Aware Responses Through Reasoning MLLMs. Proceedings of the AAAI Conference on Artificial Intelligence, 40(30), 24973–24981. https://doi.org/10.1609/aaai.v40i30.39685

Issue

Vol. 40 No. 30 (2026)

Section

AAAI Technical Track on Machine Learning VII