CARE-Bench: A Benchmark of Diverse Client Simulations Guided by Expert Principles for Evaluating LLMs in Psychological Counseling

Authors

  • Bichen Wang Harbin Institute of Technology
  • Yixin Sun Harbin Institute of Technology
  • Junzhe Wang Harbin Institute of Technology
  • Hao Yang Harbin Institute of Technology
  • Xing Fu Harbin Institute of Technology
  • Yanyan Zhao Harbin Institute of Technology
  • Si Wei iFLYTEK Co., Ltd.
  • Shijin Wang iFLYTEK Co., Ltd.
  • Bing Qin Harbin Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v40i46.41287

Abstract

The mismatch between the growing demand for psychological counseling and the limited availability of services has motivated research into the application of Large Language Models (LLMs) in this domain. Consequently, there is a need for a robust and unified benchmark to assess the counseling competence of various LLMs. Existing work, however, is limited by unprofessional client simulation, static question-and-answer evaluation formats, and unidimensional metrics. These limitations hinder its effectiveness in assessing a model's comprehensive ability to handle diverse and complex clients. To address this gap, we introduce CARE-Bench, a dynamic and interactive automated benchmark. It is built upon diverse client profiles derived from real-world counseling cases and simulated according to expert guidelines. CARE-Bench provides a multidimensional performance evaluation grounded in established psychological scales. Using CARE-Bench, we evaluate several general-purpose LLMs and specialized counseling models, revealing their current limitations. In collaboration with psychologists, we conduct a detailed analysis of the reasons for LLMs' failures when interacting with clients of different types, which provides directions for developing more comprehensive, universal, and effective counseling models.

Published

2026-03-14

How to Cite

Wang, B., Sun, Y., Wang, J., Yang, H., Fu, X., Zhao, Y., … Qin, B. (2026). CARE-Bench: A Benchmark of Diverse Client Simulations Guided by Expert Principles for Evaluating LLMs in Psychological Counseling. Proceedings of the AAAI Conference on Artificial Intelligence, 40(46), 39378–39386. https://doi.org/10.1609/aaai.v40i46.41287