What is it about?
The RBO dataset of articulated objects and interactions is a collection of 358 RGB-D video sequences (67:18 minutes) of humans manipulating 14 articulated objects under varying conditions (lighting, perspective, background, interaction). All sequences are annotated with ground-truth poses of the rigid parts and the kinematic state of the articulated object (joint states), obtained with a motion-capture system. We also provide complete kinematic models of these objects (kinematic structure and three-dimensional textured shape models). For 78 sequences, the contact wrenches applied during the manipulation are also provided.
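To make the kind of ground truth described above concrete, here is a minimal sketch of how one annotated frame (rigid-part poses plus joint states) might be represented. The class and field names are illustrative assumptions for this summary, not the dataset's actual file schema:

```python
from dataclasses import dataclass
from typing import Dict, Tuple

# Hypothetical structures illustrating the kind of per-frame ground truth
# described above; names are illustrative, not the dataset's actual schema.

@dataclass
class RigidPartPose:
    translation: Tuple[float, float, float]        # meters, in camera frame
    quaternion: Tuple[float, float, float, float]  # (w, x, y, z) orientation

@dataclass
class FrameAnnotation:
    timestamp: float                      # seconds since sequence start
    part_poses: Dict[str, RigidPartPose]  # pose of each rigid part
    joint_states: Dict[str, float]        # e.g. hinge angle in radians

# Example: one frame of a sequence with a laptop-like object (one hinge).
frame = FrameAnnotation(
    timestamp=0.033,
    part_poses={
        "base": RigidPartPose((0.0, 0.0, 0.5), (1.0, 0.0, 0.0, 0.0)),
        "lid": RigidPartPose((0.0, 0.1, 0.45), (0.92, 0.38, 0.0, 0.0)),
    },
    joint_states={"hinge": 0.79},  # about 45 degrees, opened lid
)
print(frame.joint_states["hinge"])
```

A structure like this captures what the motion-capture annotations provide per frame: a pose per rigid part and one scalar state per joint of the kinematic model.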
Why is it important?
No other dataset includes sensor data of interactions with articulated objects together with ground truth, and few datasets include articulated object models at all. This information is crucial for understanding how to manipulate these objects, testing perception algorithms, and learning to interact with this type of object.
Read the Original
This page is a summary of: The RBO dataset of articulated objects and interactions, The International Journal of Robotics Research, April 2019, SAGE Publications,
DOI: 10.1177/0278364919844314.