What is it about?
How do we seamlessly switch from one visual perspective to another, and what can this unique cognitive ability tell us about memory and the design of autonomous systems? We used robot simulations to study the brain’s ability to link a first-person experience to a global map. As the robot moved around its environment, an overhead camera acted like a map, providing a top-down view. Using a machine-learning tool known as a variational autoencoder, we were able to reconstruct the first-person view from the top-down view and vice versa. We observed that place-specific coding is more prevalent when linking a top-down view to a first-person view, whereas head-direction selectivity is more prevalent in the opposite direction. This modeling brings a different approach to understanding transformations between perspectives and suggests testable predictions for the nervous system.
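The cross-view mapping described above can be sketched as a minimal encoder-decoder: one network compresses the top-down view into a small latent code, and another reconstructs the first-person view from that code. This is only an illustrative toy (a deterministic stand-in for a variational autoencoder); the dimensions, weights, and function names below are assumptions for the sketch, not the study's actual networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the real networks operate on camera images).
TOP_DOWN_DIM = 64      # flattened top-down (map-like) view
FIRST_PERSON_DIM = 64  # flattened first-person view
LATENT_DIM = 8         # bottleneck whose units could be probed for
                       # place-specific or head-direction-like coding

# Random weights stand in for trained parameters.
W_enc = rng.normal(scale=0.1, size=(LATENT_DIM, TOP_DOWN_DIM))
W_dec = rng.normal(scale=0.1, size=(FIRST_PERSON_DIM, LATENT_DIM))

def encode(top_down):
    """Map a top-down view to a latent code (VAE mean; sampling noise omitted)."""
    return np.tanh(W_enc @ top_down)

def decode(latent):
    """Reconstruct the corresponding first-person view from the latent code."""
    return W_dec @ latent

top_down_view = rng.normal(size=TOP_DOWN_DIM)
z = encode(top_down_view)
first_person_view = decode(z)
print(z.shape, first_person_view.shape)  # (8,) (64,)
```

Training such a pair in both directions (top-down to first-person, and the reverse) is what lets the latent units be compared across the two mappings.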
Featured Image
Photo by Jamie Street on Unsplash
Why is it important?
This study and the predictions it makes could provide insight into the brain regions involved in memory and could help improve navigation systems in autonomous vehicles. It has implications for understanding memory and why we get lost, abilities that can be impaired by dementia and Alzheimer’s disease, and it has implications for drones, robots, and self-driving vehicles. An aerial drone with a top-down bird’s-eye view could provide information to a robot on the ground, and the ground robot could in turn provide valuable information to the drone. For self-driving cars, top-down map information could be incorporated into the car’s navigation and route-planning system.
Perspectives
Read the Original
This page is a summary of: Linking global top-down views to first-person views in the brain, Proceedings of the National Academy of Sciences, November 2022,
DOI: 10.1073/pnas.2202024119.