INIS 24(1)

Research Article

On the Consistency of 360 Video Quality Assessment in Repeated Subjective Tests: A Pilot Study

@ARTICLE{10.4108/eetinis.v11i1.4323,
    author={Majed Elwardy and Hans-Juergen Zepernick and Thi My Chinh Chu and Yan Hu},
    title={On the Consistency of 360 Video Quality Assessment in Repeated Subjective Tests: A Pilot Study},
    journal={EAI Endorsed Transactions on Industrial Networks and Intelligent Systems},
    volume={11},
    number={1},
    publisher={EAI},
    journal_a={INIS},
    year={2024},
    month={1},
    keywords={360 video, subjective tests, quality of experience, quality assessment, pilot study, annotated dataset, opportunity-limited conditions, standing viewing, seated viewing},
    doi={10.4108/eetinis.v11i1.4323}
}
    
Majed Elwardy¹, Hans-Juergen Zepernick¹·*, Thi My Chinh Chu¹, Yan Hu¹
¹ Blekinge Institute of Technology
* Contact email: hans-jurgen.zepernick@bth.se

Abstract

Immersive media such as virtual reality, augmented reality, and 360° video have seen tremendous technological developments in recent years. Furthermore, advances in head-mounted displays (HMDs) offer users more immersive experiences than conventional displays. To develop novel immersive media systems and services that satisfy users' expectations, it is essential to conduct subjective tests that reveal the perceived quality of immersive media. However, due to the new viewing dimensions provided by HMDs and the potential of interacting with the content, a wide range of subjective tests is required to understand the many aspects of user behavior in, and quality perception of, immersive media. The ground truth obtained from such subjective tests enables the development of optimized immersive media systems that fulfill the expectations of the users. This article focuses on the consistency of 360° video quality assessment to reveal whether users' subjective quality assessment of such immersive visual stimuli changes fundamentally over time or remains consistent, with each user having their own behavior signature. A pilot study was conducted under pandemic conditions in which participants were given the task of rating the quality of 360° video stimuli on an HMD in standing and seated viewing. The choice of conducting a pilot study is motivated by the high cognitive load that immersive media impose on participants and by the need to keep the number of participants under pandemic conditions as low as possible. To gain insight into the consistency of the participants' 360° video assessment over time, three sessions were held for each participant and each viewing condition, with long and short breaks between sessions. In particular, the opinion scores and head movements were recorded for each participant and each session in standing and seated viewing. The statistical analysis of these data leads to the conjecture that the quality ratings stay consistent throughout the sessions, with each participant having their own quality assessment signature. The head movements, which indicate the participants' scene exploration during the quality assessment task, also remain consistent for each participant according to their individual narrower or wider scene exploration signature. These findings are more pronounced for standing viewing than for seated viewing. This work supports the role of pilot studies as a useful approach for conducting pre-tests on immersive media quality under opportunity-limited conditions and for planning subsequent full subjective tests with a large panel of participants. The annotated RQA360 dataset containing the data recorded in the repeated subjective tests is made publicly available to the research community.
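
Since the RQA360 dataset is publicly available, the kind of session-to-session consistency check described in the abstract can be explored independently. The following is a minimal sketch in Python, not the authors' analysis: it assumes a hypothetical flat table with columns participant, viewing, session, stimulus, and opinion_score (the actual dataset layout and file names may differ) and computes a per-participant Spearman rank correlation between pairs of sessions as one simple consistency measure.

import itertools

import pandas as pd
from scipy.stats import spearmanr

# Hypothetical file and column names; adapt to the actual RQA360 layout.
ratings = pd.read_csv("rqa360_opinion_scores.csv")

for (participant, viewing), group in ratings.groupby(["participant", "viewing"]):
    # One row per stimulus, one column per session for this participant/viewing condition.
    scores = group.pivot(index="stimulus", columns="session", values="opinion_score")
    # Correlate each pair of sessions; high rho suggests a consistent rating signature.
    for s1, s2 in itertools.combinations(scores.columns, 2):
        rho, p_value = spearmanr(scores[s1], scores[s2], nan_policy="omit")
        print(f"{participant} ({viewing}), sessions {s1}-{s2}: rho={rho:.2f}, p={p_value:.3f}")

The same pattern could be applied to summary statistics of the recorded head movements (e.g., per-stimulus yaw range) to probe the scene exploration signatures mentioned above; which statistics the article itself uses should be taken from the paper, not from this sketch.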