Action Knowledge Transfer for Action Prediction with Partial Videos

Authors

  • Yijun Cai, Sun Yat-sen University
  • Haoxin Li, Sun Yat-sen University
  • Jian-Fang Hu, Sun Yat-sen University
  • Wei-Shi Zheng, Sun Yat-sen University

DOI:

https://doi.org/10.1609/aaai.v33i01.33018118

Abstract

Predicting the action class from partially observed videos, known as action prediction, is an important task in the computer vision field with many applications. The main challenge for action prediction lies in the lack of discriminative action information in partially observed videos. To tackle this challenge, we propose to transfer action knowledge learned from fully observed videos to improve prediction on partially observed videos. Specifically, we develop a two-stage learning framework for action knowledge transfer. In the first stage, we learn feature embeddings and a discriminative action classifier from full videos. In the second stage, the knowledge in the learned embeddings and classifier is transferred to the partial videos. Our experiments on the UCF-101 and HMDB-51 datasets show that the proposed action knowledge transfer method significantly improves action prediction performance, especially for videos with small observation ratios (e.g., 10%). We also show experimentally that our method outperforms state-of-the-art action prediction systems.
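The abstract only outlines the two-stage framework, so the following is a minimal, hypothetical PyTorch sketch of how such a transfer could be set up; it is not the authors' implementation. Stage one trains an embedding network and classifier on full-video features; stage two trains a partial-video embedding to match the full-video embedding while reusing the stage-one classifier. All module names, feature dimensions, and the MSE transfer loss are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical dimensions; the actual backbone features are not specified in the abstract.
FEAT_DIM, EMBED_DIM, NUM_CLASSES = 4096, 512, 101

class EmbeddingNet(nn.Module):
    """Maps video-level features to a compact embedding space (assumed architecture)."""
    def __init__(self, feat_dim=FEAT_DIM, embed_dim=EMBED_DIM):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(feat_dim, embed_dim), nn.ReLU())

    def forward(self, x):
        return self.fc(x)

# --- Stage 1: learn an embedding and classifier from fully observed videos ---
full_embed = EmbeddingNet()
classifier = nn.Linear(EMBED_DIM, NUM_CLASSES)
stage1_opt = torch.optim.SGD(
    list(full_embed.parameters()) + list(classifier.parameters()), lr=1e-3)

def stage1_step(full_feats, labels):
    logits = classifier(full_embed(full_feats))
    loss = F.cross_entropy(logits, labels)
    stage1_opt.zero_grad()
    loss.backward()
    stage1_opt.step()
    return loss.item()

# --- Stage 2: transfer the learned knowledge to partially observed videos ---
partial_embed = EmbeddingNet()
# Only the partial-video embedding is updated; the stage-1 embedding and
# classifier are kept fixed (they are not in this optimizer).
stage2_opt = torch.optim.SGD(partial_embed.parameters(), lr=1e-3)

def stage2_step(partial_feats, full_feats, labels, alpha=1.0):
    """Train the partial-video embedding so that (i) it stays close to the
    full-video embedding of the same sample (embedding transfer) and
    (ii) the stage-1 classifier still predicts the correct action
    (classifier transfer). The loss weighting alpha is an assumption."""
    z_partial = partial_embed(partial_feats)
    with torch.no_grad():  # full-video embedding serves as a fixed target
        z_full = full_embed(full_feats)
    transfer_loss = F.mse_loss(z_partial, z_full)
    cls_loss = F.cross_entropy(classifier(z_partial), labels)
    loss = cls_loss + alpha * transfer_loss
    stage2_opt.zero_grad()
    loss.backward()
    stage2_opt.step()
    return loss.item()

if __name__ == "__main__":
    feats_full = torch.randn(8, FEAT_DIM)   # toy batch of full-video features
    feats_part = torch.randn(8, FEAT_DIM)   # toy batch of partial-video features
    y = torch.randint(0, NUM_CLASSES, (8,))
    print("stage 1 loss:", stage1_step(feats_full, y))
    print("stage 2 loss:", stage2_step(feats_part, feats_full, y))
```

In this sketch the transfer is expressed as an MSE penalty pulling partial-video embeddings toward their full-video counterparts while the fixed classifier supplies the action supervision; the paper itself should be consulted for the exact losses and training schedule used.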

Published

2019-07-17

How to Cite

Cai, Y., Li, H., Hu, J.-F., & Zheng, W.-S. (2019). Action Knowledge Transfer for Action Prediction with Partial Videos. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8118-8125. https://doi.org/10.1609/aaai.v33i01.33018118

Section

AAAI Technical Track: Vision