AVCAffe: A Large Scale Audio-Visual Dataset of Cognitive Load and Affect for Remote Work

Authors

  • Pritam Sarkar, Queen's University, Canada; Vector Institute
  • Aaron Posen, Queen's University, Canada
  • Ali Etemad, Queen's University, Canada

DOI:

https://doi.org/10.1609/aaai.v37i1.25078

Keywords:

CMS: Affective Computing, HAI: Human-Computer Interaction

Abstract

We introduce AVCAffe, the first Audio-Visual dataset consisting of Cognitive load and Affect attributes. We record AVCAffe by simulating remote work scenarios over a video-conferencing platform, where subjects collaborate to complete a number of cognitively engaging tasks. AVCAffe is the largest originally collected (not sourced from the Internet) affective dataset in the English language. We recruit 106 participants from 18 different countries of origin, spanning an age range of 18 to 57 years, with a balanced male-female ratio. AVCAffe comprises a total of 108 hours of video, equivalent to more than 58,000 clips, along with task-based self-reported ground truth labels for arousal, valence, and cognitive load attributes such as mental demand, temporal demand, effort, and a few others. We believe AVCAffe would be a challenging benchmark for the deep learning research community given the inherent difficulty of classifying affect and cognitive load in particular. Moreover, our dataset fills a timely gap by facilitating the creation of learning systems for better self-management of remote work meetings, and further study of hypotheses regarding the impact of remote work on cognitive load and affective states.
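To make the described structure concrete, the sketch below shows one way clips could be paired with the task-based self-reported labels named in the abstract (arousal, valence, mental demand, temporal demand, effort). The file names, directory layout, and column names are assumptions for illustration only and are not the dataset's official API; consult the AVCAffe release for the actual format.

```python
"""Hypothetical sketch of indexing AVCAffe-style clips and labels.
All file names, paths, and CSV columns are assumptions for illustration."""
import csv
from pathlib import Path

# Assumed label columns, mirroring the attributes listed in the abstract.
LABEL_COLUMNS = ["arousal", "valence", "mental_demand", "temporal_demand", "effort"]


def load_label_index(labels_csv: str) -> dict:
    """Map clip_id -> dict of self-reported labels (hypothetical CSV layout)."""
    index = {}
    with open(labels_csv, newline="") as f:
        for row in csv.DictReader(f):
            index[row["clip_id"]] = {k: row[k] for k in LABEL_COLUMNS if k in row}
    return index


def iter_clips(video_dir: str, label_index: dict):
    """Yield (clip_path, labels) pairs for clips that have matching labels."""
    for clip_path in sorted(Path(video_dir).glob("*.mp4")):
        labels = label_index.get(clip_path.stem)
        if labels is not None:
            yield clip_path, labels


if __name__ == "__main__":
    labels = load_label_index("avcaffe_labels.csv")  # assumed file name
    for path, y in iter_clips("clips/", labels):     # assumed directory
        print(path.name, y)
```

In practice, audio-visual decoding and any train/validation splits would be layered on top of such an index according to the dataset's documentation.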

Published

2023-06-26

How to Cite

Sarkar, P., Posen, A., & Etemad, A. (2023). AVCAffe: A Large Scale Audio-Visual Dataset of Cognitive Load and Affect for Remote Work. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 76-85. https://doi.org/10.1609/aaai.v37i1.25078

Section

AAAI Technical Track on Cognitive Modeling & Cognitive Systems