What is it about?

This paper presents an application for motion control of a cooperating YuMi robot. Glove-mounted sensors or other handheld controllers used to control robots can restrict both the robot's operation and the operator's activities. Research focusing on camera-based monitoring of body parts or hand gestures is therefore becoming increasingly popular. The presented approach uses a camera and artificial intelligence to recognize hand gestures and controls a YuMi robot from Python over a TCP/IP connection. The program can be expanded by integrating other IoT devices that help control the robot and collect data useful for the target application.
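As a rough illustration of the Python-to-robot link, the minimal sketch below shows how a single command string could be sent to one arm controller over a TCP/IP socket. The address, port, and command syntax are assumptions made for the example and do not reproduce the actual protocol used in the paper.

    # Minimal sketch: sending one motion command to a YuMi arm controller
    # over TCP/IP from Python.  The address, port, and command syntax below
    # are illustrative assumptions, not the protocol used in the paper.
    import socket

    LEFT_ARM_SERVER = ("192.168.125.1", 5000)   # hypothetical left-arm server

    def send_command(address, command):
        """Open a TCP connection, send one command string, return the reply."""
        with socket.create_connection(address, timeout=5) as sock:
            sock.sendall(command.encode("ascii"))
            return sock.recv(1024).decode("ascii")

    if __name__ == "__main__":
        # e.g. ask the left arm to move its first joint to 30 degrees
        print(send_command(LEFT_ARM_SERVER, "MOVE_JOINT 1 30.0\n"))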


Why is it important?

We present two distinct systems: the first based on AlphaPose and the second on Halpe.

The first is a smart human-robot interface that uses AlphaPose to detect the pose of the human operator, which the robot then mimics. AlphaPose processes the incoming image and produces an output JSON file with the positions of the operator's joints. Our interface processes these key points, computes the target angles for the robot's joints from them, and sends the corresponding commands to the two servers that control the robot's arms. As a result, the robot mimics the pose of the operator's arms (currently limited to poses in one plane, but easily extensible to full 3D poses).

The second interface uses the Halpe system to obtain more detailed information about the key points of the operator's hands. The paper proposes an algorithm that extracts a static hand gesture descriptor consisting of the hand classification (left/right) and the number and identity of the outstretched fingers. The solution can bind different sets of commands to different hand gesture descriptors; when a particular static gesture is detected, the associated commands are sent to the servers controlling the robot's arms (a simplified sketch of both interfaces follows below).

The advantage is that the robot can be controlled without direct manual programming or additional sensors that restrict the operator's movement. This eliminates direct contact with the operator, increases mobility, and removes restrictions. Our approach can work with other types of robots that have similar degrees of freedom; otherwise, the code would have to be modified. The limitations of our algorithm are the number of available fingers (one control program per finger) and the required hardware equipment. Future research should aim at integrating our approach with the ROS system, at classifying and processing point clouds of the operator's hands, and at fusing data from multiple cameras (such as the Cognex cameras integrated in the YuMi robot and an external RGB-D camera).
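To make the two interfaces more concrete, the sketch below illustrates the kind of processing involved: computing a planar joint angle from three pose key points (the AlphaPose branch) and building a static hand gesture descriptor from a 21-point hand skeleton (the Halpe branch). The keypoint indices, the "outstretched finger" test, and the command bindings are simplifying assumptions for illustration, not the exact algorithm from the paper.

    # Illustrative sketch only: keypoint indices, the outstretched-finger
    # test, and the command bindings are assumptions, not the paper's code.
    import math

    def planar_joint_angle(a, b, c):
        """Angle at key point b (degrees) formed by 2D points a-b-c,
        e.g. shoulder-elbow-wrist, usable as a target angle for a robot joint."""
        ang = math.degrees(math.atan2(c[1] - b[1], c[0] - b[0])
                           - math.atan2(a[1] - b[1], a[0] - b[0]))
        ang = abs(ang)
        return ang if ang <= 180 else 360 - ang

    # Hypothetical tip/knuckle indices within a 21-point hand skeleton
    # (wrist = 0, then four points per finger from thumb to little finger).
    FINGERS = {"thumb": (4, 2), "index": (8, 6), "middle": (12, 10),
               "ring": (16, 14), "little": (20, 18)}

    def hand_descriptor(hand_keypoints, side):
        """Static gesture descriptor: hand side plus the outstretched fingers.
        A finger counts as outstretched when its tip lies farther from the
        wrist than its middle knuckle does."""
        wrist = hand_keypoints[0]
        dist = lambda p: math.hypot(p[0] - wrist[0], p[1] - wrist[1])
        stretched = tuple(name for name, (tip, knuckle) in FINGERS.items()
                          if dist(hand_keypoints[tip]) > dist(hand_keypoints[knuckle]))
        return (side, stretched)

    # Each descriptor can be bound to a set of commands for the arm servers.
    GESTURE_COMMANDS = {
        ("right", ("index",)):          ["MOVE_JOINT 1 30.0"],
        ("right", ("index", "middle")): ["MOVE_JOINT 2 45.0"],
    }

A detected descriptor is then looked up in the binding table, and the associated command strings are forwarded to the arm servers over a TCP connection like the one sketched earlier.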

Perspectives

The success of our algorithm depends mostly on the success of the Halpe subsystem, but under the right lighting conditions, and with the operator kept within 1.5–2.5 m of the camera, hand key points are detected reliably. The resulting solution can easily be expanded by integrating other IoT devices to help control the robot and to collect data useful for the particular target application. For example, wireless proximity sensors communicating over LoRaWAN could be used to gather additional information and help control the robot, as mentioned by Mihálik et al. [24].

Professor Ales Janota
University of Zilina

Read the Original

This page is a summary of: Human-Robot Motion Control Application with Artificial Intelligence for a Cooperating YuMi Robot, Electronics, August 2021, MDPI AG,
DOI: 10.3390/electronics10161976.