TY  - JOUR
AU  - Deng, Didan
AU  - Chen, Zhaokang
AU  - Zhou, Yuqian
AU  - Shi, Bertram
PY  - 2020/04/03
Y2  - 2024/03/28
TI  - MIMAMO Net: Integrating Micro- and Macro-Motion for Video Emotion Recognition
JF  - Proceedings of the AAAI Conference on Artificial Intelligence
JA  - AAAI
VL  - 34
IS  - 03
SE  - AAAI Technical Track: Humans and AI
DO  - 10.1609/aaai.v34i03.5646
UR  - https://ojs.aaai.org/index.php/AAAI/article/view/5646
SP  - 2621
EP  - 2628
AB  - Spatial-temporal feature learning is of vital importance for video emotion recognition. Previous deep network structures often focused on macro-motion, which extends over long time scales, e.g., on the order of seconds. We believe that integrating structures capturing information about both micro- and macro-motion will benefit emotion prediction, because humans perceive both micro- and macro-expressions. In this paper, we propose to combine micro- and macro-motion features to improve video emotion recognition with a two-stream recurrent network, named MIMAMO (Micro-Macro-Motion) Net. Specifically, smaller and shorter micro-motions are analyzed by a two-stream network, while larger and more sustained macro-motions can be well captured by a subsequent recurrent network. Assigning specific interpretations to the roles of different parts of the network enables us to make choices of parameters based on prior knowledge: choices that turn out to be optimal. One of the important innovations in our model is the use of interframe phase differences rather than optical flow as input to the temporal stream. Compared with optical flow, phase differences require less computation and are more robust to illumination changes. Our proposed network achieves state-of-the-art performance on two video emotion datasets, the OMG emotion dataset and the Aff-Wild dataset. The most significant gains are for arousal prediction, for which motion information is intuitively more informative. Source code is available at https://github.com/wtomin/MIMAMO-Net.
ER  -