What is it about?

Imagine you want to make a digital version of yourself move naturally in a virtual world, like in a video game or the Metaverse. The best way to do this is by capturing your actual movements and transferring them to the digital character. Usually, you would need many sensors attached to your body to get this right, but more sensors mean higher costs and more inconvenience. Some existing solutions try to use fewer sensors, yet they struggle to make the digital character move naturally; think of it like trying to draw a detailed picture from very limited information.

In our study, we created a new system called SparsePoser that reconstructs your full-body motion from only six tracking devices. It combines deep learning with the data from those trackers to make your digital counterpart move smoothly and naturally in real time.
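To give a concrete (and deliberately simplified) picture of what "combining deep learning with six trackers" can look like, here is a minimal, hypothetical sketch in Python/PyTorch. It is not the SparsePoser architecture from the paper; the tracker layout, feature sizes, skeleton size, and network shape are all assumptions chosen only to illustrate the general idea of mapping sparse tracker readings to a full-body pose.

```python
# Minimal, illustrative sketch (NOT the authors' SparsePoser architecture):
# a toy network that maps six tracker readings to a full-body pose.
# Names, sizes, and rotation format below are assumptions for illustration only.
import torch
import torch.nn as nn

NUM_TRACKERS = 6          # e.g. head, two hands, two feet, pelvis (assumed layout)
TRACKER_FEATURES = 7      # 3D position + quaternion rotation per tracker (assumed)
NUM_BODY_JOINTS = 22      # typical humanoid skeleton size (assumed)

class SparseToFullBody(nn.Module):
    """Toy regressor: sparse tracker signals -> a rotation for every body joint."""
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_TRACKERS * TRACKER_FEATURES, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, NUM_BODY_JOINTS * 4),  # one quaternion per joint
        )

    def forward(self, trackers: torch.Tensor) -> torch.Tensor:
        # trackers: (batch, NUM_TRACKERS, TRACKER_FEATURES)
        flat = trackers.flatten(start_dim=1)
        quats = self.net(flat).view(-1, NUM_BODY_JOINTS, 4)
        # Normalize so every output is a valid unit quaternion.
        return quats / quats.norm(dim=-1, keepdim=True)

# One frame of tracker data in, one full-body pose out.
model = SparseToFullBody()
pose = model(torch.randn(1, NUM_TRACKERS, TRACKER_FEATURES))
print(pose.shape)  # torch.Size([1, 22, 4])
```

A network this small is only a stand-in: the real challenge the paper addresses is producing motion that stays smooth and natural over time from such sparse input, which a per-frame toy regressor like this does not capture.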

Read the Original

This page is a summary of: SparsePoser: Real-time Full-body Motion Reconstruction from Sparse Data, ACM Transactions on Graphics, October 2023, ACM (Association for Computing Machinery). DOI: 10.1145/3625264.