What is it about?
We are researching ways to provide sufficient information to d/Deaf people, who use sign language (SL), through signing avatars. Animating a signing avatar requires 3D sign motion data. In this work, we propose a method that uses a diffusion model to generate 3D sign motions from text and label prompts, enabling the generation of the complex hand and body movements required to drive a signing avatar.
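To give a feel for how a diffusion model produces motion, here is a minimal, illustrative sketch of the reverse (denoising) process for a 3D sign-motion sequence. This is not the authors' code: the sequence shape, the toy denoiser, and the conditioning vector (standing in for an encoded text/label prompt) are all assumptions made for the example.

```python
import numpy as np

T_STEPS = 50    # number of diffusion steps (assumed for illustration)
N_FRAMES = 30   # motion frames
N_JOINTS = 24   # skeleton joints, each with (x, y, z) coordinates

def toy_denoiser(x_t, t, cond):
    """Stand-in for a learned network that predicts the noise in x_t,
    conditioned on the step t and a prompt embedding. A real model
    would be a trained neural network; this toy version just nudges
    the sample toward the conditioning signal so the loop runs."""
    return x_t - cond[None, None, :] * (t / T_STEPS)

def sample_motion(cond, rng):
    """Reverse diffusion: start from Gaussian noise and iteratively
    denoise into a (frames, joints, 3) motion sequence."""
    betas = np.linspace(1e-4, 0.02, T_STEPS)   # noise schedule
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)

    x = rng.standard_normal((N_FRAMES, N_JOINTS, 3))  # pure noise
    for t in reversed(range(T_STEPS)):
        eps = toy_denoiser(x, t, cond)                # predicted noise
        # DDPM-style posterior mean update
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:  # add fresh noise at every step except the last
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x

rng = np.random.default_rng(0)
cond = rng.standard_normal(3)       # stands in for an encoded prompt
motion = sample_motion(cond, rng)   # per-frame, per-joint 3D positions
print(motion.shape)                 # (30, 24, 3)
```

The output array can then be retargeted onto an avatar skeleton frame by frame; in the actual system, the denoiser is a network trained on recorded sign motion.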
Why is it important?
We believe that signing avatar animation improves access to information for the d/Deaf community because of its flexibility and immediate availability. However, driving a signing avatar requires high-fidelity 3D motion data, which has previously been recorded in a motion capture studio, a process that is time-consuming and expensive. It is therefore important to create 3D sign motion more easily, and our method can generate it without specialized equipment such as motion capture.
Perspectives
This research is an ongoing project, and the generated motion sometimes does not match the user's prompt. We will therefore continue our research to improve the performance of sign motion generation.
Kohei Hakozaki
NHK
This page is a summary of: Sign Motion Generation by Motion Diffusion Model, July 2024, ACM (Association for Computing Machinery),
DOI: 10.1145/3641234.3671023.