A No-Reference Video Quality Predictor for H.264 Compression and Scaling Artifacts


No-reference (NR) video quality assessment (VQA) models, as opposed to full-reference (FR) models, are gaining popularity because they apply broadly to user-uploaded video services such as YouTube and Facebook, where pristine references are unavailable. However, owing to the difficulty of the problem, very few well-performing NR-VQA models exist. In this paper, we propose a novel `opinion-unaware' NR video quality predictor that relies solely on `quality-aware' natural statistical models in the space-time domain. The proposed quality predictor, called the \textbf{S}elf-reference based \textbf{LE}arning-free \textbf{E}valuator of \textbf{Q}uality (SLEEQ), consists of three components: feature extraction in the spatial and temporal domains, motion-based feature fusion, and spatio-temporal feature pooling to derive a single quality score for a given video. We demonstrate the competence of the proposed model, which significantly outperforms existing NR-VQA models and competes very well with a leading FR-VQA model trained on human judgments.