What is it about?

The RBO dataset of articulated objects and interactions is a collection of 358 RGB-D video sequences (67:18 minutes in total) of humans manipulating 14 articulated objects under varying conditions (lighting, perspective, background, interaction). All sequences are annotated with ground truth for the poses of the rigid parts and the kinematic state of the articulated object (joint states), obtained with a motion capture system. We also provide complete kinematic models of these objects (kinematic structure and three-dimensional textured shape models). For 78 sequences, the contact wrenches applied during the manipulation are also provided.
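
To illustrate what a "joint state" annotation means in practice, here is a minimal sketch (not the dataset's official tooling; the function and example poses are hypothetical) of how a revolute joint angle can be recovered from the ground-truth poses of two rigid parts, assuming the poses are given as 4x4 homogeneous transforms and the hinge axis is known in the parent part's frame:

```python
# Minimal sketch: recover a revolute joint state (angle) from the world poses
# of two rigid parts. Assumes 4x4 homogeneous transforms and a known hinge axis
# expressed in the parent part's frame; not tied to the dataset's file format.
import numpy as np
from scipy.spatial.transform import Rotation as R

def revolute_joint_angle(T_parent, T_child, hinge_axis):
    """Signed angle (radians) of the child part about `hinge_axis`, relative to the parent."""
    T_rel = np.linalg.inv(T_parent) @ T_child          # child pose expressed in the parent frame
    rotvec = R.from_matrix(T_rel[:3, :3]).as_rotvec()  # axis-angle vector of the relative rotation
    axis = np.asarray(hinge_axis, dtype=float)
    axis /= np.linalg.norm(axis)
    return float(rotvec @ axis)                        # projection onto the hinge axis

# Example with made-up poses: a lid opened ~30 degrees about the parent's x-axis.
T_parent = np.eye(4)
T_child = np.eye(4)
T_child[:3, :3] = R.from_euler("x", 30, degrees=True).as_matrix()
print(np.degrees(revolute_joint_angle(T_parent, T_child, [1, 0, 0])))  # ~30.0
```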


Why is it important?

No other dataset includes sensor data of interactions with articulated objects together with ground-truth annotations. Additionally, few datasets include articulated object models. This information is crucial for understanding how to manipulate these objects, for testing perception algorithms, and for learning to interact with objects of this kind.

Perspectives

During my PhD I studied perception and manipulation of articulated objects (doors, drawers, boxes, ...) as a concrete application of online interactive perception: problems that require interaction to extract the information needed for manipulation. In my experiments, I always wanted ground truth of how people interact with this type of object, how the objects move, how they appear to the camera, and even the forces required for the manipulation. We created this dataset with that purpose, and I hope other roboticists and researchers in artificial perception can benefit from it.

Dr Roberto Martín-Martín
Stanford University

Read the Original

This page is a summary of: The RBO dataset of articulated objects and interactions, The International Journal of Robotics Research, April 2019, SAGE Publications. DOI: 10.1177/0278364919844314.
