What is it about?
This study presents a cognitive architecture for learning to control the attention of an artificial agent (Attention Control Learner: ACL). The main idea developed here, which sets it apart from previous related work, is that combining state estimation with attention control during the learning process improves task performance.
Why is it important?
This study presents ways to cope with the limitations of a learning system, such as restricted time and processing power. For example, it shows how an agent can continue to survive and perform a task when it loses some of its facilities, such as its sensors. Another aspect of this study is its resemblance to human cognition and to how humans learn different tasks under different conditions and limitations; for instance, how we perform a task like driving a car when part of our sensory path is damaged (e.g., the side mirrors are broken). Attention control and state estimation are the two main elements elaborated in this research, and the results show that they improve the performance of the system.
Perspectives
This study presents a cognitive architecture with many similarities to human learning, since attention control and state estimation are both human cognitive functions. It suggests ways to deal with the limited time and processing power available to any intelligent agent, which are never unlimited. This helps the agent survive and the system perform for longer, and the proposed methods also improve task performance. Another interesting aspect of this study is the variable sampling rate that emerges during learning: the agent attends to different parts of its divided sensory input at a rate that depends on its current needs. This is unusual, because most other systems sample sensory input data at a fixed, invariable rate.
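The variable-sampling idea can be illustrated with a toy sketch (this is not the paper's ACL algorithm; all class names, parameters, and numbers below are hypothetical). A controller attends to one block of sensory input per step, choosing the block whose state estimate is least certain; blocks whose estimates go stale quickly are therefore sampled at a higher effective rate, while a fixed round-robin scheme would sample every block equally.

```python
class SensorBlock:
    """One block of the agent's divided sensory input (hypothetical)."""
    def __init__(self, name, drift):
        self.name = name
        self.drift = drift        # how fast this block's estimate goes stale
        self.uncertainty = 1.0    # grows while the block is ignored

    def attend(self):
        self.uncertainty = 0.1    # a fresh reading resets uncertainty

class AttentionController:
    """Toy need-based attention: attend to the least certain block."""
    def __init__(self, blocks):
        self.blocks = blocks

    def step(self):
        # Pick the block whose state estimate is least certain,
        # and let the unattended blocks go stale.
        target = max(self.blocks, key=lambda b: b.uncertainty)
        target.attend()
        for b in self.blocks:
            if b is not target:
                b.uncertainty += b.drift
        return target.name

blocks = [SensorBlock("front", 0.5),
          SensorBlock("left", 0.2),
          SensorBlock("right", 0.2)]
ctrl = AttentionController(blocks)
schedule = [ctrl.step() for _ in range(30)]

# The fast-drifting "front" block is attended more often than the others,
# so its effective sampling rate is higher: the rate varies with need.
print(schedule.count("front"), schedule.count("left"), schedule.count("right"))
```

In this sketch the sampling rate of each block is an emergent property of the uncertainty dynamics rather than a fixed parameter, which is the contrast the summary draws with fixed-rate systems.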
Zahra Gharaee
Read the Original
This page is a summary of: Attention control learning in the decision space using state estimation, International Journal of Systems Science, August 2014, Taylor &amp; Francis, DOI: 10.1080/00207721.2014.945982.