Dynamic Concept Composition for Zero-Example Event Detection

Authors

  • Xiaojun Chang, University of Technology Sydney
  • Yi Yang, University of Technology Sydney
  • Guodong Long, University of Technology Sydney
  • Chengqi Zhang, University of Technology Sydney
  • Alexander Hauptmann, Carnegie Mellon University

DOI:

https://doi.org/10.1609/aaai.v30i1.10474

Keywords:

Event Detection, Zero-Example Event Detection, Dynamic Concept Composition

Abstract

In this paper, we focus on automatically detecting events in unconstrained videos without any visual training exemplars. In principle, zero-shot learning makes it possible to train an event detection model on the assumption that an event (e.g., "birthday party") can be described by multiple mid-level semantic concepts (e.g., "blowing candle", "birthday cake"). Towards this goal, we first pre-train a bundle of concept classifiers using data from other sources. We then evaluate the semantic correlation of each concept w.r.t. the event of interest and select the relevant concept classifiers, which are applied to all test videos to obtain multiple prediction score vectors. While most existing systems combine the predictions of the concept classifiers with fixed weights, we propose to learn the optimal weights of the concept classifiers for each test video by exploring a set of online videos with free-form text descriptions of their content. To validate the effectiveness of the proposed approach, we conducted extensive experiments on the TRECVID MEDTest 2014, MEDTest 2013, and CCV datasets. The experimental results confirm the superiority of the proposed approach.
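
To make the pipeline described in the abstract concrete, below is a minimal sketch of the fixed-weight baseline that the paper improves on: estimate each concept's relevance to the event query, keep the most relevant concept classifiers, and fuse their prediction score vectors into an event score per video. The cosine-similarity relevance measure over text embeddings, the `rank_videos` function, and all data shapes here are illustrative assumptions, not the paper's actual method, which further adapts the fusion weights for each test video.

```python
# Sketch of zero-example event detection via concept composition.
# Assumptions (not from the paper): concept classifier scores are given as a
# matrix, and concept-to-event relevance is estimated with cosine similarity
# between pre-computed text embeddings of the event query and concept names.
# This shows the fixed-weight fusion baseline; the paper learns the weights
# per test video instead.

import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def rank_videos(event_emb, concept_embs, concept_scores, top_k=5):
    """
    event_emb:      (d,)  text embedding of the event query, e.g. "birthday party"
    concept_embs:   (C, d) text embeddings of the C concept names
    concept_scores: (V, C) classifier scores of the C concepts on V test videos
    Returns an event-detection score for each of the V videos.
    """
    # Semantic correlation of each concept w.r.t. the event of interest.
    relevance = np.array([cosine(event_emb, c) for c in concept_embs])
    # Keep only the top-k most relevant concept classifiers.
    keep = np.argsort(relevance)[-top_k:]
    weights = relevance[keep]
    weights /= weights.sum()  # normalize to a convex combination
    # Fuse the selected concepts' prediction score vectors with fixed weights.
    return concept_scores[:, keep] @ weights

# Toy usage with random data standing in for real embeddings and classifiers.
rng = np.random.default_rng(0)
d, C, V = 300, 50, 10
event_emb = rng.normal(size=d)
concept_embs = rng.normal(size=(C, d))
concept_scores = rng.random(size=(V, C))
print(rank_videos(event_emb, concept_embs, concept_scores))
```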

Published

2016-03-05

How to Cite

Chang, X., Yang, Y., Long, G., Zhang, C., & Hauptmann, A. (2016). Dynamic Concept Composition for Zero-Example Event Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10474