What is it about?
The classic Perception-Action cycle has proven effective in autonomous tasks where an agent (in robotics, a robot) must perform a task by itself: you perceive the environment and act accordingly. But when the task relies on collaborating with another agent, things get complicated, because we typically do not have access to the other agent's intention. It can be argued that we can infer this intention by observing their behavior. However, just as we try to infer the intentions or desires of our fellow humans and end up making countless mistakes and causing a myriad of misunderstandings, the same happens to a robot (which, moreover, often represents the world differently than a human does).

To address this, we add a new block to the cycle in charge of processing the intention of the other, human agent. This covers both the implicit intention, which can be inferred from behavior, and the explicit intention, which the human states directly. This opens the door to at least two considerations. First, the robot can ask the human to state their intention explicitly (which we have found humans willingly do, and which even increases their trust in the robot). Second, these two types of intention may contradict each other, which requires an additional block to process the discrepancies.
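To make the idea of the extra intention block more concrete, here is a minimal sketch of one Perception-Intention-Action step in Python. It is not the implementation from the paper: all function names, the simulated observations, and the simple "trust the explicit statement" policy are illustrative assumptions, meant only to show how inferred (implicit) and stated (explicit) intentions can be combined and how a discrepancy block could sit between them and the action step.

from dataclasses import dataclass
import random


@dataclass
class Intention:
    goal: str          # e.g. "receive_object"
    confidence: float  # rough confidence in [0, 1]


def perceive() -> dict:
    # Placeholder for the robot's sensing step (cameras, force sensors, ...).
    return {"human_reaching": random.random() > 0.5}


def infer_implicit_intention(observation: dict) -> Intention:
    # Implicit intention: inferred from the partner's observed behavior.
    if observation["human_reaching"]:
        return Intention(goal="receive_object", confidence=0.6)
    return Intention(goal="continue_own_task", confidence=0.5)


def request_explicit_intention() -> Intention:
    # Explicit intention: the robot simply asks the human to state it.
    # The answer is simulated here; on a real robot it would come from speech or a GUI.
    stated = random.choice(["receive_object", "continue_own_task"])
    return Intention(goal=stated, confidence=0.95)


def resolve_discrepancy(implicit: Intention, explicit: Intention) -> Intention:
    # Extra block handling the case where inferred and stated intentions disagree.
    if implicit.goal == explicit.goal:
        return explicit
    # One simple (assumed) policy: trust the explicit statement, but flag the mismatch.
    print(f"Discrepancy: inferred '{implicit.goal}' vs stated '{explicit.goal}'")
    return explicit


def act(intention: Intention) -> None:
    print(f"Acting on goal '{intention.goal}' (confidence {intention.confidence:.2f})")


def perception_intention_action_step() -> None:
    observation = perceive()
    implicit = infer_implicit_intention(observation)
    explicit = request_explicit_intention()
    act(resolve_discrepancy(implicit, explicit))


if __name__ == "__main__":
    for _ in range(3):
        perception_intention_action_step()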
Why is it important?
This work is interesting because it puts on the table the possibility of the robot asking the human directly, instead of chasing an ever more accurate inference engine that extracts ever more cues. This results in lower resource consumption as well as (from the results we are getting) more trust in the robot. In the end, this is how we humans really work: when the uncertainty (or the cost of being wrong) is too high, we are willing to ask our colleagues for advice instead of trying to solve everything by ourselves.
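The "ask when uncertainty or the cost of being wrong is too high" idea can be written as a tiny decision rule. The thresholds and function below are purely illustrative assumptions, not values or code from the paper; they only show the shape of such a rule.

def should_ask_human(confidence: float, error_cost: float,
                     confidence_threshold: float = 0.7,
                     cost_threshold: float = 0.5) -> bool:
    """Ask for an explicit statement when inference is too uncertain
    or acting on a wrong guess would be too costly (illustrative thresholds)."""
    return confidence < confidence_threshold or error_cost > cost_threshold


# A low-confidence guess about a high-stakes handover triggers a question:
print(should_ask_human(confidence=0.55, error_cost=0.8))  # True  -> ask the human
print(should_ask_human(confidence=0.90, error_cost=0.1))  # False -> act on the inference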
Read the Original
This page is a summary of: Perception-Intention-Action Cycle as a Human Acceptable Way for Improving Human-Robot Collaborative Tasks, March 2023, ACM (Association for Computing Machinery),
DOI: 10.1145/3568294.3580149.