LiveBot: Generating Live Video Comments Based on Visual and Textual Contexts

Authors

  • Shuming Ma, Peking University
  • Lei Cui, Microsoft Research Asia
  • Damai Dai, Peking University
  • Furu Wei, Microsoft Research Asia
  • Xu Sun, Peking University

DOI:

https://doi.org/10.1609/aaai.v33i01.33016810

Abstract

We introduce the task of automatic live commenting. Live commenting, also called “video barrage”, is an emerging feature on online video sites that lets viewers’ real-time comments fly across the screen like bullets or roll along the right side of the screen. Live comments are a mixture of opinions about the video and chit-chat with other commenters. Automatic live commenting requires an AI agent to comprehend the video and interact with the human viewers who also comment, so it is a good testbed for an agent’s ability to handle both dynamic vision and language. In this work, we construct a large-scale live comment dataset with 2,361 videos and 895,929 live comments. We then introduce two neural models that generate live comments from the visual and textual contexts; they outperform previous neural baselines such as the sequence-to-sequence model. Finally, we provide a retrieval-based evaluation protocol for automatic live commenting, in which the model sorts a set of candidate comments by log-likelihood score and is evaluated with metrics such as mean reciprocal rank. Putting it all together, we demonstrate the first “LiveBot”. The dataset and code are available at https://github.com/lancopku/livebot.
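The retrieval-based protocol above ranks a pool of candidate comments by the model’s log-likelihood score and measures how highly the ground-truth comment is placed. The following is a minimal sketch of that ranking step and of mean reciprocal rank; the candidate scores, gold indices, and function names are illustrative assumptions, not taken from the paper or the released code.

```python
def rank_candidates(scores):
    """Return candidate indices sorted by model log-likelihood, best first."""
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)

def mean_reciprocal_rank(ranked_lists, gold_indices):
    """Mean reciprocal rank over a set of ranking instances.

    ranked_lists[i] is a list of candidate indices sorted best-first;
    gold_indices[i] is the index of the ground-truth comment for instance i.
    """
    total = 0.0
    for ranked, gold in zip(ranked_lists, gold_indices):
        rank = ranked.index(gold) + 1  # 1-based position of the gold comment
        total += 1.0 / rank
    return total / len(ranked_lists)

# Hypothetical example: log-likelihoods for 4 candidate comments,
# where candidate 2 is the human-written (ground-truth) comment.
scores = [-12.3, -8.7, -5.1, -9.9]
ranking = rank_candidates(scores)           # -> [2, 1, 3, 0]
mrr = mean_reciprocal_rank([ranking], [2])  # -> 1.0 (gold comment ranked first)
print(ranking, mrr)
```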


Published

2019-07-17

How to Cite

Ma, S., Cui, L., Dai, D., Wei, F., & Sun, X. (2019). LiveBot: Generating Live Video Comments Based on Visual and Textual Contexts. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 6810-6817. https://doi.org/10.1609/aaai.v33i01.33016810


Section

AAAI Technical Track: Natural Language Processing