What is it about?
Industry 4.0 aims to make collaborative robotics accessible and effective inside factories. Human–robot interaction is enhanced by advanced perception systems, which enable flexible and reliable production. We developed a novel visual servoing system, based on a machine learning technique, to automate the winding of copper wire during the production of electric motors. The visual servoing function is synthesized using a Gaussian mixture model (GMM), which guarantees an extremely fast response. The system was developed and tested in a path following application on an aluminium bar that simulates the stator teeth of a generic electric motor. Experimental results demonstrate that the proposed method reproduces the visual servoing function with minimal error while guaranteeing an extremely high working frequency.
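At its core, synthesizing the visual servoing function with a GMM amounts to learning a regression from image features to robot commands. The sketch below illustrates one common way to do this, Gaussian Mixture Regression (GMR): a GMM is fitted offline over joint feature–command samples and then conditioned online on the current features to obtain a command. The feature and command dimensions, the toy data, and all function names are assumptions for illustration, not the exact formulation used in the paper.

```python
# Illustrative sketch: Gaussian Mixture Regression (GMR) approximating a
# visual-servoing function, i.e. a mapping from image-feature errors to
# robot velocity commands. Dimensions, data and names are assumptions.
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

def fit_joint_gmm(features, commands, n_components=5, seed=0):
    """Fit a GMM over joint [feature, command] vectors collected offline."""
    joint = np.hstack([features, commands])
    return GaussianMixture(n_components=n_components, covariance_type="full",
                           random_state=seed).fit(joint)

def gmr_predict(gmm, x, in_dim):
    """Condition the joint GMM on input x to get the expected command (GMR)."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    # Responsibility of each component given the input part only.
    resp = np.array([w * multivariate_normal.pdf(x, m[:in_dim], c[:in_dim, :in_dim])
                     for w, m, c in zip(weights, means, covs)])
    resp /= resp.sum()
    y = np.zeros(means.shape[1] - in_dim)
    for h, m, c in zip(resp, means, covs):
        gain = c[in_dim:, :in_dim] @ np.linalg.inv(c[:in_dim, :in_dim])
        y += h * (m[in_dim:] + gain @ (x - m[:in_dim]))
    return y

# Toy usage: 2-D image-feature error -> 2-D velocity command.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))           # simulated feature errors
Y = -0.5 * X + 0.01 * rng.normal(size=X.shape)  # roughly proportional control law
model = fit_joint_gmm(X, Y)
print(gmr_predict(model, np.array([0.2, -0.1]), in_dim=2))
```

Because the online step is a closed-form conditioning of a few Gaussians rather than a full image-Jacobian computation, each query is extremely cheap, which is what allows the very high working frequency reported above.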
Why is it important?
Visual servoing is widely used when manipulating or inspecting objects. Such a system can be decoupled into two modules: one in charge of perception and the other handling robot motion. We developed a visually servoed path following system to scan the slot of an extruded aluminium bar, simulating wire deployment in real stator teeth. The tool pin was kept continuously inserted in the gap at a fixed height and orientation, avoiding collisions while ensuring a high control rate.
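A rough sketch of what such a servoing loop could look like, assuming the GMR model from the previous example: each cycle extracts the current image features and issues a velocity command from a single regression query, which is what keeps the control rate high. The camera and robot interfaces (grab_features, send_velocity) and the chosen rate are placeholders, not the actual implementation.

```python
# Hedged sketch of a high-rate servoing loop built on the GMR model above.
# grab_features() and send_velocity() stand in for camera and robot drivers;
# their names and the control rate are illustrative assumptions.
import time

def servo_loop(model, grab_features, send_velocity, rate_hz=250.0, in_dim=2):
    period = 1.0 / rate_hz
    while True:
        start = time.perf_counter()
        x = grab_features()                # current image-feature error
        v = gmr_predict(model, x, in_dim)  # one regression query per cycle
        send_velocity(v)                   # planar command; pin height/orientation held fixed
        # Sleep for the remainder of the cycle to hold the target rate.
        time.sleep(max(0.0, period - (time.perf_counter() - start)))
```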
Perspectives
Our innovative approach overcomes the processing-time bottleneck of the general class of image-based visual servoing (IBVS) systems, strongly reducing computation time and bringing the working frequency of the visual servoing loop close to the typical control rates of robotic systems.
Stefano Michieletto
Università degli Studi di Padova
Read the Original
This page is a summary of: A machine learning-based visual servoing approach for fast robot control in industrial setting, International Journal of Advanced Robotic Systems, November 2017, SAGE Publications. DOI: 10.1177/1729881417738884.