TY - JOUR
AU - Wu, Chenglei
AU - Zhang, Ruixiao
AU - Wang, Zhi
AU - Sun, Lifeng
PY - 2020/06/23
Y2 - 2024/03/28
TI - A Spherical Convolution Approach for Learning Long Term Viewport Prediction in 360 Immersive Video
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 34
IS - 01
SE - AAAI Technical Track: AI and the Web
DO - 10.1609/aaai.v34i01.7377
UR - https://ojs.aaai.org/index.php/AAAI/article/view/7377
SP - 14003-14040
AB - Viewport prediction for 360 video forecasts a viewer's viewport while watching a 360 video with a head-mounted display, which benefits many VR/AR applications such as 360 video streaming and mobile cloud VR. Existing studies based on planar convolutional neural networks (CNNs) suffer from the image distortion and splitting caused by the sphere-to-plane projection. In this paper, we first propose a spherical convolution based feature extraction network to distill spatial-temporal 360 information, and we provide a solution for training such a network without a dedicated 360 image or video classification dataset. Unlike previous methods, which base their predictions on image pixel-level information, we propose a semantic content and preference based viewport prediction scheme. We adopt a recurrent neural network (RNN) to extract a user's personal preference for 360 video content from minutes of embedded viewing histories, and we utilize this semantic preference as spatial attention to help the network find the "interesting" regions in a future video. We further design a tailored mixture density network (MDN) based viewport prediction scheme, including viewport modeling and a tailored loss function, to improve efficiency and accuracy. Our extensive experiments demonstrate the rationality and performance of our method, which outperforms state-of-the-art methods, especially in long-term prediction.
ER -