What is it about?
With the growing popularity of mobile devices as well as video-sharing platforms (e.g., YouTube, Facebook, TikTok, and Twitch), User-Generated Content (UGC) videos have become increasingly common and now account for a large portion of multimedia traffic on the internet. Unlike professionally generated videos produced by filmmakers and videographers, UGC videos are typically captured by non-professional users under uncontrolled conditions. Quality prediction of UGC videos is of paramount importance for optimizing and monitoring their processing in hosting platforms, such as coding, transcoding, and streaming. In this work, we propose an accurate and efficient Blind Video Quality Assessment (BVQA) model for UGC videos, which we name 2BiVQA, for double Bi-LSTM Video Quality Assessment.
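The "double Bi-LSTM" name suggests two recurrent stages: one aggregating spatial (patch-level) features within each frame, and one aggregating frame-level descriptors over time. The sketch below is a minimal illustration of that idea in PyTorch, not the authors' implementation; the feature dimension, hidden size, pooling choices, and the assumption that per-patch CNN features are precomputed are all placeholders.

```python
import torch
import torch.nn as nn

class DoubleBiLSTMSketch(nn.Module):
    """Hedged sketch of a double Bi-LSTM quality predictor.

    Assumes per-patch CNN features are already extracted, with
    input shape (batch, frames, patches, feat_dim). All sizes
    below are illustrative, not taken from the paper.
    """
    def __init__(self, feat_dim=512, hidden=128):
        super().__init__()
        # First Bi-LSTM: aggregate patch features within each frame (spatial pooling).
        self.spatial = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        # Second Bi-LSTM: aggregate frame descriptors over time (temporal pooling).
        self.temporal = nn.LSTM(2 * hidden, hidden, batch_first=True, bidirectional=True)
        # Regression head mapping the pooled descriptor to a single quality score.
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):
        b, t, p, d = x.shape
        # Spatial pass: treat each frame's patches as one sequence.
        s, _ = self.spatial(x.reshape(b * t, p, d))
        frame_desc = s.mean(dim=1).reshape(b, t, -1)  # average over patches
        # Temporal pass over the per-frame descriptors.
        out, _ = self.temporal(frame_desc)
        return self.head(out.mean(dim=1)).squeeze(-1)  # one score per video

model = DoubleBiLSTMSketch()
scores = model(torch.randn(2, 8, 16, 512))  # 2 videos, 8 frames, 16 patches each
```

The bidirectional recurrences let each patch (or frame) descriptor be weighted in the context of its neighbors in both directions, which is one plausible way to mimic spatial and temporal pooling in human quality judgments.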
Why is it important?
UGC videos typically contain multiple authentic distortions, generally introduced during capture and processing by naive users. Blind quality prediction of UGC videos is therefore quite challenging: the degradations are unknown and highly diverse, and no pristine reference is available. The proposed 2BiVQA takes the characteristics of UGC videos into account and mimics the behavior of the human visual system. We conducted comprehensive tests on four UGC-VQA datasets; the results show that 2BiVQA outperforms state-of-the-art methods on the considered datasets.
Read the Original
This page is a summary of: 2BiVQA: Double Bi-LSTM-based Video Quality Assessment of UGC Videos, ACM Transactions on Multimedia Computing, Communications, and Applications, December 2023, ACM (Association for Computing Machinery). DOI: 10.1145/3632178.