What is it about?

How do we seamlessly switch from one visual perspective to another, and what can this unique cognitive function tell us about memory and autonomous system design? We used robot simulations to understand the brain’s ability to link a first-person experience to a global map. As the robot moved around its environment, an overhead camera acted like a map, providing a top-down view. Using tools from AI and machine learning known as variational autoencoders, we reconstructed the first-person view from the top-down view and vice versa. We observed that place-specific coding is more prevalent when linking a top-down view to a first-person view, and head-direction selectivity is more prevalent in the other direction. This modeling brings a different approach to understanding transformations between perspectives and suggests testable predictions for the nervous system.
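To make the idea concrete, here is a minimal sketch (in PyTorch) of a cross-view variational autoencoder of the kind described above: it encodes one viewpoint into a latent code and decodes the other viewpoint. This is an illustrative assumption, not the authors’ actual architecture; the image sizes, layer widths, loss weighting, and names such as CrossViewVAE are hypothetical.

# Minimal sketch (not the authors' actual model): a VAE-style network that
# encodes one viewpoint (e.g., the overhead/top-down camera image) and decodes
# the other viewpoint (the robot's first-person view). Sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossViewVAE(nn.Module):
    def __init__(self, latent_dim=32, img_size=64):
        super().__init__()
        flat = 3 * img_size * img_size
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(flat, 512), nn.ReLU())
        self.to_mu = nn.Linear(512, latent_dim)       # mean of latent Gaussian
        self.to_logvar = nn.Linear(512, latent_dim)   # log-variance of latent Gaussian
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, flat), nn.Sigmoid(),
        )
        self.img_size = img_size

    def forward(self, src_view):
        h = self.encoder(src_view)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        recon = self.decoder(z).view(-1, 3, self.img_size, self.img_size)
        return recon, mu, logvar

def vae_loss(recon, target_view, mu, logvar):
    # Reconstruction error against the *other* perspective, plus the usual KL regularizer.
    rec = F.mse_loss(recon, target_view, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl

# Usage: map a batch of top-down frames to predicted first-person frames (dummy data).
model = CrossViewVAE()
top_down = torch.rand(8, 3, 64, 64)      # overhead-camera images
first_person = torch.rand(8, 3, 64, 64)  # matching robot-camera images
recon, mu, logvar = model(top_down)
loss = vae_loss(recon, first_person, mu, logvar)
loss.backward()

Training the same kind of network in the opposite direction (first-person in, top-down out) would give the second half of the perspective-switching pair.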


Why is it important?

This study and the predictions it makes could provide insight into brain regions involved in memory and could help improve navigation systems in autonomous vehicles. It has implications for understanding memory and why we get lost, abilities that can be impaired by dementia and Alzheimer’s disease, and it has applications for drones, robots, and self-driving vehicles. An aerial drone with a top-down bird’s-eye view could provide information to a robot on the ground, and the ground robot could in turn send valuable information back to the drone. For self-driving cars, top-down map information could be incorporated into the car’s navigation and route-planning system.

Perspectives

“If I look at the UCI campus map, I can place myself in the specific location where I expect to see certain buildings, crossroads, etc. Going the other way, given some campus buildings around me, I can place myself on the campus map,” says lead author Jinwei Xing, UCI cognitive sciences Ph.D. candidate. “We wanted to better understand how this computation is done in the brain.”

“What we discovered in the AI model were things that looked like place cells, head direction cells, and cells responding to objects just like those observed in the brains of humans and rodents,” says co-author Jeff Krichmar. “Perspective switching is something we are constantly doing in our daily lives, but there is little experimental data trying to understand it. Our modeling study suggests a plausible solution to this problem.”

Jeff Krichmar
University of California, Irvine

Read the Original

This page is a summary of: Linking global top-down views to first-person views in the brain, Proceedings of the National Academy of Sciences, November 2022, DOI: 10.1073/pnas.2202024119.
