D-vlog: Multimodal Vlog Dataset for Depression Detection

Authors

  • Jeewoo Yoon, Sungkyunkwan University, Seoul, Korea; RAONDATA, Seoul, Korea
  • Chaewon Kang, Sungkyunkwan University, Seoul, Korea
  • Seungbae Kim, University of California, Los Angeles, USA
  • Jinyoung Han, Sungkyunkwan University, Seoul, Korea; RAONDATA, Seoul, Korea

DOI:

https://doi.org/10.1609/aaai.v36i11.21483

Keywords:

AI For Social Impact (AISI Track Papers Only)

Abstract

Detecting depression based on non-verbal behaviors has received considerable attention. However, most prior work has focused on detecting depressed individuals in laboratory settings, which is difficult to generalize to practice. In addition, little attention has been paid to analyzing the non-verbal behaviors of depressed individuals in the wild. In this paper, we therefore present a multimodal depression dataset, D-Vlog, which consists of 961 vlogs (around 160 hours) collected from YouTube and can be used to develop depression detection models based on the non-verbal behavior of individuals in real-world scenarios. We develop a multimodal deep learning model that uses acoustic and visual features extracted from the collected data to detect depression. The proposed model employs a cross-attention mechanism to effectively capture the relationships between acoustic and visual features and to generate useful multimodal representations for depression detection. Extensive experimental results demonstrate that the proposed model significantly outperforms baseline models. We believe our dataset and the proposed model are useful for analyzing and detecting depressed individuals based on non-verbal behavior.
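
The cross-attention fusion described in the abstract can be illustrated with a minimal PyTorch sketch. This is not the authors' released implementation: the class name, feature dimensions, bidirectional attention layout, and mean pooling below are illustrative assumptions, chosen only to show how queries from one modality can attend to keys and values from the other.

    import torch
    import torch.nn as nn

    class CrossModalAttention(nn.Module):
        """Sketch of bidirectional cross-attention between acoustic and
        visual feature sequences (hyperparameters are illustrative)."""

        def __init__(self, dim: int = 256, num_heads: int = 4):
            super().__init__()
            # Acoustic features attend to visual features, and vice versa.
            self.a2v = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.v2a = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.norm_a = nn.LayerNorm(dim)
            self.norm_v = nn.LayerNorm(dim)

        def forward(self, acoustic: torch.Tensor, visual: torch.Tensor):
            # acoustic: (batch, T_a, dim), visual: (batch, T_v, dim).
            # Queries come from one modality; keys/values from the other.
            a_att, _ = self.a2v(query=acoustic, key=visual, value=visual)
            v_att, _ = self.v2a(query=visual, key=acoustic, value=acoustic)
            acoustic = self.norm_a(acoustic + a_att)  # residual + norm
            visual = self.norm_v(visual + v_att)
            # Pool over time and fuse into one multimodal representation.
            fused = torch.cat([acoustic.mean(dim=1), visual.mean(dim=1)], dim=-1)
            return fused  # (batch, 2 * dim), input to a depression classifier

    # Example: a batch of 8 vlogs with 100 acoustic and 50 visual frames.
    model = CrossModalAttention(dim=256, num_heads=4)
    fused = model(torch.randn(8, 100, 256), torch.randn(8, 50, 256))
    print(fused.shape)  # torch.Size([8, 512])

Because each modality's queries attend to the other modality's keys and values, the fused vector reflects acoustic-visual interactions rather than each stream in isolation, which is the intuition behind cross-attention fusion for this task.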

Published

2022-06-28

How to Cite

Yoon, J., Kang, C., Kim, S., & Han, J. (2022). D-vlog: Multimodal Vlog Dataset for Depression Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 12226-12234. https://doi.org/10.1609/aaai.v36i11.21483