What is it about?
In this paper, we introduced IMUGPT, a system that can autonomously generate unlimited amounts of virtual IMU data, without any manual effort, by combining ChatGPT, motion synthesis models, and signal processing techniques. The pipeline works in three stages: ChatGPT first creates diverse textual descriptions of human activities; T2M-GPT then turns these descriptions into 3D human pose sequences; finally, virtual IMU data is extracted from these sequences using inverse kinematics and IMUSim. Our experiments confirmed that virtual IMU data generated by IMUGPT significantly boosts classifier performance on three key HAR datasets.
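To make the last stage of the pipeline concrete, the sketch below shows one simplified way a virtual accelerometer stream can be derived from a 3D joint trajectory by double differentiation. This is a minimal illustration, not the actual IMUSim-based pipeline (which additionally models sensor orientation, noise, and calibration); the sampling rate, gravity handling, and toy wrist trajectory are assumptions made purely for the example.

import numpy as np

FS = 50.0  # assumed sampling rate in Hz (illustrative choice)
GRAVITY = np.array([0.0, 0.0, -9.81])  # world-frame gravity, m/s^2

def virtual_accelerometer(positions, fs=FS):
    # positions: (T, 3) array of joint positions in meters, world frame.
    # Returns a (T, 3) array of specific force (linear acceleration minus
    # gravity), as an ideal, noise-free accelerometer would sense it.
    dt = 1.0 / fs
    velocity = np.gradient(positions, dt, axis=0)     # first time derivative
    acceleration = np.gradient(velocity, dt, axis=0)  # second time derivative
    return acceleration - GRAVITY

# Toy trajectory standing in for a T2M-GPT pose sequence: a wrist
# oscillating as if waving. In IMUGPT this would instead come from the
# motion synthesis model.
t = np.arange(0.0, 2.0, 1.0 / FS)
positions = np.stack(
    [0.1 * np.sin(2 * np.pi * t),           # x: side-to-side sway
     np.zeros_like(t),                      # y: static
     1.2 + 0.05 * np.cos(2 * np.pi * t)],   # z: up-down bob
    axis=1,
)

acc = virtual_accelerometer(positions)
print(acc.shape)  # (100, 3): one 3-axis accelerometer sample per frame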
Why is it important?
IMUGPT's primary innovation is its ability to generate a large labeled dataset with a range and diversity unmatched by existing datasets in the domain. Such data is crucial for building robust and generalizable models, and it therefore has the potential to bring transformative changes to the HAR field, laying a foundation for further advances and breakthroughs.
Perspectives
In this work, we introduce a novel approach to data augmentation that combines generative models from natural language processing (NLP) and computer vision (CV). As we enter an era in which generative AI is rapidly emerging as a pivotal trend, our method offers a glimpse into its potential applications. We eagerly anticipate seeing how the community will further leverage and evolve these techniques.
Zikang Leng
Georgia Institute of Technology
Read the Original
This page is a summary of: Generating Virtual On-body Accelerometer Data from Virtual Textual Descriptions for Human Activity Recognition, October 2023, ACM (Association for Computing Machinery). DOI: 10.1145/3594738.3611361.