What is it about?
In this work, we explore the use of freely available, user-generated labels from web videos for video understanding. We create a benchmark dataset of around 2 million videos with associated user-generated annotations and other meta information. We use the collected dataset for action classification and demonstrate its usefulness on the existing small-scale annotated datasets UCF101 and HMDB51. We study different loss functions and two pretraining strategies: simple supervised learning and self-supervised learning. We also show how a network pretrained on the proposed dataset can improve robustness to video corruption and label noise in downstream datasets. We present this as a benchmark dataset for learning from noisy labels in video understanding. The dataset, code, and trained models will be made publicly available for future research.
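To make the pretrain-then-transfer idea concrete, here is a minimal sketch in PyTorch of pretraining a video classifier on a large noisy-label corpus and then fine-tuning it on a small clean dataset such as UCF101. The toy 3D-CNN backbone, class counts, and data loaders are illustrative placeholders, not the paper's actual setup.

```python
# Minimal sketch (not the paper's exact setup): pretrain on noisy web labels,
# then swap the classification head and fine-tune on a small clean dataset.
import torch
import torch.nn as nn

class SmallVideoNet(nn.Module):
    """Toy 3D-CNN backbone; the paper's actual backbone may differ."""
    def __init__(self, num_classes):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):  # x: (batch, 3, frames, height, width)
        return self.head(self.backbone(x))

def run_epoch(model, loader, optimizer, loss_fn, device="cpu"):
    model.train()
    for clips, labels in loader:
        clips, labels = clips.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(clips), labels)
        loss.backward()
        optimizer.step()

# 1) Pretrain on the noisy web-labelled videos (hypothetical loader).
model = SmallVideoNet(num_classes=2000)  # placeholder number of noisy-label classes
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
# run_epoch(model, noisy_pretrain_loader, opt, nn.CrossEntropyLoss())

# 2) Fine-tune on a small clean dataset (e.g. UCF101 with 101 classes):
#    keep the pretrained backbone weights, replace only the classification head.
model.head = nn.Linear(16, 101)
opt = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
# run_epoch(model, ucf101_train_loader, opt, nn.CrossEntropyLoss())
```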
Why is it important?
This work proposes a new large-scale benchmark dataset for video understanding from noisy data. The dataset is collected using labels from standard video benchmarks and retains useful surrounding meta information and all user-provided multi-labels for each data point, without human verification. We demonstrate its usefulness for downstream action recognition on two standard action classification benchmarks, UCF101 and HMDB51, and report significant gains in top-1 accuracy. We also demonstrate an interesting robustness property against varying levels of asymmetric label noise. We hope that this dataset serves as a benchmark for research on learning from noisy labels in videos and proves helpful for various multimedia tasks.
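For readers unfamiliar with the term, the sketch below illustrates one common way "asymmetric" label noise is simulated: with probability p, each label is flipped to a specific class-dependent wrong class rather than to a uniformly random one. The fixed confusion mapping here is a placeholder and not necessarily the protocol used in the paper.

```python
# Illustrative asymmetric label-noise injection; the class->class confusion map
# below is a placeholder, not the paper's exact evaluation protocol.
import numpy as np

def add_asymmetric_noise(labels, num_classes, noise_rate, seed=0):
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    flip_to = (np.arange(num_classes) + 1) % num_classes  # fixed confusion map (placeholder)
    flip_mask = rng.random(len(labels)) < noise_rate
    labels[flip_mask] = flip_to[labels[flip_mask]]
    return labels

clean = np.array([0, 1, 2, 3, 4] * 4)
noisy = add_asymmetric_noise(clean, num_classes=5, noise_rate=0.4)
print((noisy != clean).mean())  # observed fraction of flipped labels, roughly the noise rate
```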
Read the Original
This page is a summary of: NoisyActions2M: A Multimedia Dataset for Video Understanding from Noisy Labels, December 2021, ACM (Association for Computing Machinery), DOI: 10.1145/3469877.3490580.