Research Article
Two-Layer FoV Prediction Model for Viewport Dependent Streaming of 360-Degree Videos
@INPROCEEDINGS{10.1007/978-3-030-06161-6_49, author={Yunqiao Li and Yiling Xu and Shaowei Xie and Liangji Ma and Jun Sun}, title={Two-Layer FoV Prediction Model for Viewport Dependent Streaming of 360-Degree Videos}, proceedings={Communications and Networking. 13th EAI International Conference, ChinaCom 2018, Chengdu, China, October 23-25, 2018, Proceedings}, proceedings_a={CHINACOM}, year={2019}, month={1}, keywords={Omnidirectional video; Field of view prediction; FoV-based transmission}, doi={10.1007/978-3-030-06161-6_49} }
- Yunqiao Li
- Yiling Xu
- Shaowei Xie
- Liangji Ma
- Jun Sun
Year: 2019
Two-Layer FoV Prediction Model for Viewport Dependent Streaming of 360-Degree Videos
CHINACOM
Springer
DOI: 10.1007/978-3-030-06161-6_49
Abstract
As the representative and most widely used content form of Virtual Reality (VR) applications, omnidirectional videos provide an immersive experience by rendering 360-degree scenes for users. Since only part of an omnidirectional video can be viewed at a time due to the characteristics of human vision, field-of-view (FoV) based transmission has been proposed: it preserves high quality inside the FoV while reducing the quality outside it to lower the amount of transmitted data. In this case, a transient drop in content quality occurs when the user's FoV changes, which can be mitigated by predicting the FoV in advance. In this paper, we propose a two-layer model for FoV prediction. The first layer detects content heat maps in an offline process, while the second layer predicts the FoV of a specific user online during his/her viewing session. We use an LSTM model to compute the viewing probability of each region given the results from the first layer, the user's previous orientations, and the navigation speed. In addition, we set up a correction model to check and correct unreasonable results. The performance evaluation shows that our model achieves higher accuracy and smaller fluctuation compared with widely used approaches.
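To make the described architecture concrete, the following is a minimal sketch of what the second (online) layer could look like: an LSTM that consumes, per timestep, the offline heat-map probabilities of each region together with the user's past orientations and navigation speed, and outputs a viewing probability per region. The class name, tiling into 32 regions, input encoding, and dimensions are illustrative assumptions for this sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FoVPredictorLSTM(nn.Module):
    """Illustrative second-layer predictor (assumed structure, not the paper's code)."""

    def __init__(self, num_regions=32, hidden_size=128):
        super().__init__()
        # Per timestep: heat-map probability of each region (from the offline
        # first layer), the user's orientation (yaw, pitch), and the
        # navigation speed (angular velocity of yaw and pitch).
        input_size = num_regions + 2 + 2
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_regions)

    def forward(self, heatmaps, orientations, speeds):
        # heatmaps:     (batch, T, num_regions) offline heat-map probabilities
        # orientations: (batch, T, 2)           past yaw/pitch samples
        # speeds:       (batch, T, 2)           yaw/pitch angular velocities
        x = torch.cat([heatmaps, orientations, speeds], dim=-1)
        out, _ = self.lstm(x)
        # Viewing probability of each region at the prediction horizon.
        return torch.sigmoid(self.head(out[:, -1]))

# Example: a batch of 4 users, 1 s of history sampled at 10 Hz, 32 tiles.
model = FoVPredictorLSTM()
probs = model(torch.rand(4, 10, 32), torch.rand(4, 10, 2), torch.rand(4, 10, 2))
print(probs.shape)  # torch.Size([4, 32])
```

A correction step such as the one mentioned in the abstract could then post-process these probabilities, e.g. suppressing regions that are geometrically unreachable given the user's current orientation and speed; its exact form is not specified here.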