TY - JOUR
AU - Chen, Kai
AU - Wei, Zhipeng
AU - Chen, Jingjing
AU - Wu, Zuxuan
AU - Jiang, Yu-Gang
PY - 2022/06/28
Y2 - 2024/03/28
TI - Attacking Video Recognition Models with Bullet-Screen Comments
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 36
IS - 1
SE - AAAI Technical Track on Computer Vision I
DO - 10.1609/aaai.v36i1.19907
UR - https://ojs.aaai.org/index.php/AAAI/article/view/19907
SP - 312
EP - 320
AB - Recent research has demonstrated that Deep Neural Networks (DNNs) are vulnerable to adversarial patches, which introduce perceptible but localized changes to the input. Nevertheless, existing approaches have focused on generating adversarial patches on images, while their counterparts on videos remain less explored. Compared with images, attacking videos is much more challenging, as it requires considering not only spatial cues but also temporal cues. To close this gap, we introduce a novel adversarial attack in this paper, the bullet-screen comment (BSC) attack, which attacks video recognition models with BSCs. Specifically, adversarial BSCs are generated with a Reinforcement Learning (RL) framework, where the environment is set as the target model and the agent plays the role of selecting the position and transparency of each BSC. By continuously querying the target models and receiving feedback, the agent gradually adjusts its selection strategies to achieve a high fooling rate with non-overlapping BSCs. As BSCs can be regarded as a kind of meaningful patch, adding them to a clean video will neither affect people’s understanding of the video content nor arouse their suspicion. We conduct extensive experiments to verify the effectiveness of the proposed method. On both the UCF-101 and HMDB-51 datasets, our BSC attack achieves a fooling rate of about 90% against three mainstream video recognition models while occluding less than 8% of the area in the video. Our code is available at https://github.com/kay-ck/BSC-attack.
ER - 