PASS: Patch Automatic Skip Scheme for Efficient Real-Time Video Perception on Edge Devices

Authors

  • Qihua Zhou The Hong Kong Polytechnic University
  • Song Guo The Hong Kong Polytechnic University
  • Jun Pan The Hong Kong Polytechnic University
  • Jiacheng Liang Pennsylvania State University
  • Zhenda Xu The Hong Kong Polytechnic University
  • Jingren Zhou Alibaba Group

DOI:

https://doi.org/10.1609/aaai.v37i3.25491

Keywords:

CV: Image and Video Retrieval, CV: Motion & Tracking, CV: Representation Learning for Vision

Abstract

Real-time video perception tasks are often challenging on resource-constrained edge devices due to accuracy drops and hardware overhead, and saving computation is the key to improving performance. Existing methods rely either on domain-specific neural chips or on models searched in advance, both of which require specialized optimization for each task's properties. In this work, we propose the Patch Automatic Skip Scheme (PASS), a general, task-independent, end-to-end learning pipeline that supports diverse video perception settings by decoupling acceleration from the task. The gist is to capture temporal similarity across video frames and skip redundant computation at the patch level, where a patch is a non-overlapping square block of the visual input. PASS equips each convolution layer with a learnable gate that selectively determines which patches can be safely skipped without degrading model accuracy. For each layer, the desired gate must make flexible skip decisions based on intermediate features without any annotations, which cannot be achieved by the conventional supervised learning paradigm. To address this challenge, we are the first to construct a tough self-supervisory procedure for optimizing these gates, which learns to extract contrastive representations, i.e., to distinguish similarity and difference, from frame sequences. These high-capacity gates serve as a plug-and-play module that turns convolutional neural network (CNN) backbones into patch-skippable architectures and automatically generates a proper skip strategy to accelerate different video-based downstream tasks, e.g., outperforming the state-of-the-art MobileHumanPose (MHP) in 3D pose estimation and FairMOT in multiple object tracking by up to 9.43x and 12.19x speedups, respectively.
By directly processing raw frame data, PASS generalizes to real-time video streams on commodity edge devices, e.g., the NVIDIA Jetson Nano, with efficient performance in realistic deployment.
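To make the patch-skipping idea concrete, the sketch below illustrates the gating mechanism the abstract describes: frames are divided into non-overlapping square patches, a per-patch gate decides which patches changed enough to need recomputation, and cached outputs are reused for the rest. This is a minimal NumPy illustration, not the paper's implementation; in PASS the gates are learned end-to-end via the self-supervisory procedure, whereas here a fixed difference threshold stands in for the learned gate, and all function names (`split_patches`, `skip_mask`, `gated_forward`) are illustrative.

```python
import numpy as np

def split_patches(frame, patch):
    """Divide an (H, W, C) frame into non-overlapping square patches.

    Returns an array of shape (nH, nW, patch, patch, C)."""
    H, W, C = frame.shape
    nH, nW = H // patch, W // patch
    return (frame[:nH * patch, :nW * patch]
            .reshape(nH, patch, nW, patch, C)
            .swapaxes(1, 2))

def skip_mask(prev_frame, cur_frame, patch=8, thresh=0.05):
    """Per-patch skip decision: skip (True) any patch whose mean absolute
    change from the previous frame falls below `thresh`.

    A learned gate in PASS would replace this fixed-threshold heuristic."""
    diff = np.abs(split_patches(cur_frame, patch)
                  - split_patches(prev_frame, patch))
    return diff.mean(axis=(2, 3, 4)) < thresh  # (nH, nW) boolean mask

def gated_forward(layer, prev_frame, cur_frame, cached_out,
                  patch=8, thresh=0.05):
    """Apply `layer` only to patches the gate marks as changed,
    reusing `cached_out` (per-patch outputs of the previous frame)
    for the skipped ones."""
    mask = skip_mask(prev_frame, cur_frame, patch, thresh)
    out = cached_out.copy()
    patches = split_patches(cur_frame, patch)
    for i, j in zip(*np.nonzero(~mask)):
        out[i, j] = layer(patches[i, j])  # recompute changed patches only
    return out, mask
```

Since temporally static regions dominate typical video streams, the fraction of `True` entries in the mask is a direct proxy for the per-layer compute saved by skipping.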

Published

2023-06-26

How to Cite

Zhou, Q., Guo, S., Pan, J., Liang, J., Xu, Z., & Zhou, J. (2023). PASS: Patch Automatic Skip Scheme for Efficient Real-Time Video Perception on Edge Devices. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3), 3787-3795. https://doi.org/10.1609/aaai.v37i3.25491

Section

AAAI Technical Track on Computer Vision III