What is it about?
Large labeled datasets are scarce in the human activity recognition community due to the high cost of annotation and privacy concerns. To address this, IMUTube was introduced: a system that extracts synthetic/virtual inertial measurement unit (IMU) data from videos of people performing activities. In the paper, we assessed the utility of this virtual IMU data for recognizing activities with more subtle movements, such as writing, eating, and driving. To do so, we introduced a new metric that gauges the subtlety of an activity from video using 2D pose and optical flow estimation. We then correlated this metric with the utility of the virtual IMU data in recognizing various daily activities and in detecting eating patterns.
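To make the subtlety metric concrete, here is a minimal sketch of how such a score could be computed from the two cues named above. The exact formulation in the paper may differ; the function name, the input formats, and the equal weighting of the pose and flow cues are all assumptions for illustration.

    import numpy as np

    def subtlety_score(keypoints, flow_magnitude):
        """Toy subtlety score for a video clip of an activity (lower = subtler).

        keypoints:      (T, J, 2) array of 2D pose keypoints over T frames
                        and J joints, e.g. from an off-the-shelf pose estimator.
        flow_magnitude: (T - 1,) array of mean optical flow magnitude for
                        each consecutive frame pair.
        """
        # Mean per-joint displacement between consecutive frames (pose cue).
        pose_motion = np.linalg.norm(np.diff(keypoints, axis=0), axis=-1).mean()
        # Mean optical flow magnitude over the clip (pixel-motion cue).
        flow_motion = np.asarray(flow_magnitude, dtype=float).mean()
        # Equal weighting of the two cues is an assumption, not the paper's choice.
        return 0.5 * pose_motion + 0.5 * flow_motion

Under a score like this, a clip of someone writing would fall well below a clip of jumping jacks, which is the kind of separation a subtlety metric needs before it can be correlated with the utility of virtual IMU data.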
Why is it important?
Virtual IMU data extracted with IMUTube has been shown to notably improve classifier performance for activities with coarse movements, such as gym and locomotion activities. However, it was unclear how effective it is for activities with more nuanced movements. We found that for recognizing activities with subtle movements, adding virtual IMU data offers little benefit. This finding is important for future work in activity recognition, guiding effort and resources toward areas where they can have the most impact.
Perspectives
I hope this article will make people think more about the use of synthetic data for data augmentation, especially in fields that lack large labeled datasets due to the cost of annotation. In the paper, we outlined the limitations of one approach to generating virtual IMU data and encourage the community to improve existing methods or devise new ways of generating data that allow for more robust and generalizable model training.
Zikang Leng
Georgia Institute of Technology
Read the Original
This page is a summary of: On the Utility of Virtual On-body Acceleration Data for Fine-grained Human Activity Recognition, October 2023, ACM (Association for Computing Machinery). DOI: 10.1145/3594738.3611364.