Paper ID | SMR-2.2
Paper Title | A Temporal Statistics Model for UGC Video Quality Prediction
Authors | Zhengzhong Tu, Chia-Ju Chen, University of Texas at Austin, United States; Yilin Wang, Neil Birkbeck, Balu Adsumilli, Google Inc., United States; Alan Bovik, University of Texas at Austin, United States
Session | SMR-2: Perception and Quality Models
Location | Area F
Session Time | Wednesday, 22 September, 14:30 - 16:00
Presentation Time | Wednesday, 22 September, 14:30 - 16:00
Presentation | Poster
Topic | Image and Video Sensing, Modeling, and Representation: Perception and quality models for images & video
Abstract | Blind video quality assessment of user-generated content (UGC) has become a trending and challenging problem. Previous studies have shown the efficacy of natural scene statistics for capturing spatial distortions; the exploration of temporal video statistics on UGC, however, remains relatively limited. Here we propose the first general, effective, and efficient temporal statistics model that accounts for temporal- and motion-related distortions in UGC video quality assessment by analyzing regularities in the temporal bandpass domain. The proposed temporal model can serve as a plug-in module to boost existing no-reference video quality predictors that lack motion-relevant features. Experimental results on recent large-scale UGC video databases show that the proposed model significantly improves the performance of existing methods at very reasonable computational expense.
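The abstract does not specify the exact temporal bandpass filters or statistical fits used in the paper. As a hedged illustration of the general natural-scene-statistics approach it builds on, the sketch below assumes simple frame differencing as the temporal bandpass filter and a moment-matching generalized Gaussian distribution (GGD) fit to the normalized bandpass coefficients; the function names `temporal_bandpass`, `fit_ggd`, and `temporal_nss_features` are illustrative, not taken from the paper.

```python
import math
import numpy as np

def temporal_bandpass(frames):
    # frames: (T, H, W) array. Frame differencing is the simplest
    # temporal bandpass filter: it removes the static (low-frequency)
    # component and isolates motion/temporal change.
    return np.diff(frames.astype(np.float64), axis=0)

def fit_ggd(x):
    # Moment-matching GGD fit, as commonly used for NSS features.
    # Returns (shape alpha, scale sigma); alpha ~ 2 for Gaussian data.
    x = np.asarray(x, dtype=np.float64).ravel()
    sigma = math.sqrt(np.mean(x ** 2))
    rho = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    # Invert r(a) = Gamma(2/a)^2 / (Gamma(1/a) * Gamma(3/a)) over a grid.
    alphas = np.arange(0.2, 10.0, 0.001)
    r = np.array([math.gamma(2 / a) ** 2 /
                  (math.gamma(1 / a) * math.gamma(3 / a)) for a in alphas])
    alpha = alphas[np.argmin(np.abs(r - rho))]
    return alpha, sigma

def temporal_nss_features(frames):
    # Per-difference-frame: mean-subtract, divisively normalize
    # (MSCN-style), fit a GGD, then pool the parameters over time.
    diffs = temporal_bandpass(frames)
    feats = []
    for f in diffs:
        mscn = (f - f.mean()) / (f.std() + 1.0)  # +1 avoids division by zero
        feats.append(fit_ggd(mscn))
    return np.mean(feats, axis=0)  # pooled (alpha, sigma) feature pair
```

Features like the pooled `(alpha, sigma)` pair can then be concatenated with the spatial features of an existing no-reference predictor, which is the "plug-in module" usage the abstract describes.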