What is it about?

In this article, an adjustable autonomy framework is proposed for human–robot collaboration (HRC) in which a robot uses a reinforcement learning mechanism guided by a human operator's rewards in an initially unknown workspace. Within the proposed framework, the autonomy level of the robot is automatically adjusted in an HRC setting represented by a Markov decision process model. When the robot reaches higher performance levels, it can operate more autonomously in the sense that it needs less intervention from the human operator. A novel Q-learning mechanism with an integrated ϵ-greedy approach is implemented for robot learning, capturing both correct actions and mistakes as the basis for adjusting the robot's autonomy level. The proposed HRC framework can adapt to changes in the workspace as well as to changes in the human operator's reward mechanism (scaling and shifting), adjusting the autonomy level accordingly. The autonomy level is automatically lowered when the workspace changes, allowing the robot to explore new actions and adapt to the new workspace. In addition, the human operator can reset or lower the robot's autonomy level to force it to relearn the workspace if its performance is unsatisfactory. The developed algorithm is applied to a realistic HRC setting involving the humanoid robot Baxter. The experimental results are analyzed to assess the effectiveness of the proposed adjustable autonomy framework in three cases: when the workspace does not change, when the robot's autonomy level is reset or lowered by the human operator, and when the workspace is changed by the introduction of new objects. The results confirm the framework's capability to successfully adjust the autonomy level in response to changes in the human operator's commands or the workspace.
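
To make the learning mechanism concrete, below is a minimal sketch of tabular Q-learning with ϵ-greedy exploration in which the reward comes from the human operator rather than the environment, as described above. All names and values here (human_reward, the toy workspace size, alpha, gamma) are illustrative assumptions for the sketch, not the paper's implementation.

```python
# Minimal tabular sketch of Q-learning with epsilon-greedy exploration, where
# the reward signal comes from the human operator rather than the environment.
# Everything here (human_reward, the toy workspace size, alpha, gamma) is an
# illustrative assumption, not the paper's implementation.
import random

n_states, n_actions = 10, 4              # assumed discrete workspace model (MDP)
alpha, gamma = 0.1, 0.9                  # learning rate and discount factor
Q = [[0.0] * n_actions for _ in range(n_states)]

def human_reward(state, action):
    """Stand-in for the operator's feedback channel: +1 correct, -1 mistake."""
    return 1.0 if action == state % n_actions else -1.0

def epsilon_greedy(state, epsilon):
    """Explore with probability epsilon; otherwise exploit the best known action."""
    if random.random() < epsilon:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: Q[state][a])

def q_update(state, action, reward, next_state):
    """Standard one-step Q-learning backup toward the operator-given reward."""
    best_next = max(Q[next_state])
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])

# One illustrative learning run over randomly chosen next states.
state = 0
for _ in range(100):
    action = epsilon_greedy(state, epsilon=0.2)
    reward = human_reward(state, action)
    next_state = random.randrange(n_states)  # assumed transition, for demo only
    q_update(state, action, reward, next_state)
    state = next_state
```

In an actual deployment, human_reward would be replaced by the operator's live feedback and the random transition by the robot's sensed next state.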


Why is it important?

This article proposes an adjustable autonomy framework for human–robot collaboration (HRC) in which a robot employs reinforcement learning under the guidance of a human operator's rewards in an initially unknown workspace. The core idea is the automatic adjustment of the robot's autonomy level within an HRC setting represented by a Markov decision process model. The significance of the framework lies in its ability to adapt to dynamic scenarios: as the robot achieves higher performance levels, it gains the capacity to operate more autonomously, requiring less human intervention. A novel Q-learning mechanism with an integrated ϵ-greedy approach enables the robot to learn from both its correct actions and its mistakes, which serve as the foundation for autonomy-level adjustments. The framework also adapts to changes in the workspace and to modifications of the human operator's reward mechanism. Notably, when the workspace changes, the autonomy level automatically decreases, allowing the robot to explore new actions and adapt. The human operator retains the authority to reset or lower the autonomy level, compelling the robot to relearn the workspace if its performance falls short. Applied to a realistic HRC scenario involving the humanoid robot Baxter, the experimental results affirm the effectiveness of the framework: whether the workspace remains stable, the robot's autonomy is reset by the operator, or the workspace undergoes changes, the framework successfully adjusts the autonomy level, demonstrating its value for real-world human–robot collaboration.
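
The adjustment logic described above could take a form like the following hedged sketch: the autonomy level rises with sustained good performance and drops on an operator reset or a detected workspace change, with higher autonomy mapped to less exploration (a smaller ϵ) and hence less operator intervention. The thresholds, function names, and the level-to-ϵ mapping are assumptions for illustration, not the authors' actual update rule.

```python
# Hedged sketch of coupling an autonomy level to performance, operator resets,
# and workspace changes, and of mapping higher autonomy to less exploration
# (a smaller epsilon) and hence less operator intervention. Thresholds, names,
# and the level-to-epsilon mapping are assumptions, not the authors' rule.
MAX_LEVEL = 5

def update_autonomy(level, success_rate, operator_reset, workspace_changed):
    """Return the new autonomy level after a batch of recent interactions."""
    if operator_reset or workspace_changed:
        return 0                      # drop to low autonomy to force relearning
    if success_rate > 0.9 and level < MAX_LEVEL:
        return level + 1              # consistently good: act more autonomously
    if success_rate < 0.5 and level > 0:
        return level - 1              # degrading: request more human guidance
    return level

def epsilon_for(level):
    """Higher autonomy -> less exploration and fewer operator interventions."""
    return 0.05 + 0.5 * (1.0 - level / MAX_LEVEL)
```

Tying ϵ to the autonomy level means that an operator reset or a workspace change both lowers the level and restores exploration, matching the relearning behavior described above.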

Perspectives

Innovation in Autonomy Adjustment: The framework's automatic adjustment of robot autonomy based on the human operator's rewards is innovative. This allows the robot to learn from human feedback, enabling it to operate more autonomously as it gains experience and performs better.

Adaptability to Workspace Changes: The ability of the framework to adapt to changes in the workspace is commendable. Lowering the autonomy level when the workspace changes allows the robot to explore new actions, showcasing flexibility in real-world scenarios.

Q-learning Mechanism: The integration of a Q-learning mechanism with an ϵ-greedy approach is a robust choice. This approach is effective for capturing both correct actions and mistakes, forming a solid foundation for adjusting the robot's autonomy level.

Human Intervention Control: The inclusion of a mechanism for the human operator to reset or lower the autonomy level demonstrates thoughtful consideration for user control. This ensures that the human operator maintains authority and can intervene when necessary.

Experimental Validation with Baxter: Applying the algorithm to a humanoid robot like Baxter in a realistic HRC setting adds practical value to the study. The experimental results provide insights into the framework's effectiveness under different conditions, such as an unchanged workspace, human-initiated resets, and workspace changes.

Scalability and Generalizability: It would be interesting to understand how scalable and generalizable the proposed framework is to different robotic systems, tasks, and environments. Discussing potential limitations and areas for future improvement could enhance the paper.

Ethical Considerations: Discussing ethical considerations related to the interaction between robots and humans, for instance the potential impact of autonomous robots in various settings and the safety measures required, could strengthen the ethical foundation of the proposed framework.

Comparisons with Existing Approaches: If applicable, comparing the proposed framework with existing approaches in the literature and highlighting key differentiators and advantages over other methods can provide context and contribute to the significance of the work.

Md Khurram Monir Rabby

Read the Original

This page is a summary of: A Learning-Based Adjustable Autonomy Framework for Human–Robot Collaboration, IEEE Transactions on Industrial Informatics, September 2022, Institute of Electrical & Electronics Engineers (IEEE),
DOI: 10.1109/tii.2022.3145567.
