Shibo Wang, Shusen Yang, Hairong Su, Cong Zhao, Chenren Xu, Feng Qian, Nanbin Wang, Zongben Xu
Mobile 360-degree video streaming has grown significantly in popularity, but its quality of experience (QoE) suffers from insufficient and variable wireless network bandwidth. Recently, saliency-driven 360-degree streaming has overcome the buffer size limitation of head movement trajectory (HMT)-driven solutions and thus strikes a better balance between video quality and rebuffering. However, inaccurate network estimations and intrinsic saliency bias still challenge saliency-based streaming approaches, limiting further QoE improvement. To address these challenges, we design RoSal360, a robust saliency-driven quality adaptation algorithm for 360-degree video streaming. Specifically, we present a practical, tile-size-aware deep neural network (DNN) model with a decoupled self-attention architecture to accurately and efficiently predict the transmission time of video tiles. Moreover, we design a reinforcement learning (RL)-driven online correction algorithm to robustly compensate for improper quality allocations caused by saliency bias. Through extensive prototype evaluations over real wireless network environments, including commodity WiFi, 4G/LTE, and 5G links in the wild, RoSal360 significantly enhances video quality and reduces the rebuffering ratio, thereby improving viewer QoE, compared to state-of-the-art algorithms.
Haotian Guo, Feng Wang, Wei Zhang, Yifei Zhu, Laizhong Cui, Jiangchuan Liu, F. Richard Yu, Lei Zhang
Christian Koch, Arne Rak, Michael Zink, Ralf Steinmetz, Amr Rizk
Igor D. D. Curcio, Henri Toukomaa, Deepa Naik
Chuong Hoang Vo, Jui-Chiu Chiang, Duy H. Le, Thu Nguyen, Tuan Van Pham