What is it about?

The brain can identify potentially salient temporal features within a hierarchically organized sequence without supervision. We developed a novel, biologically plausible style of network-level computation. The significance of this study is that, by incorporating findings on the structure of actual neurons, the neural network learns to gate information transfer in a hierarchically organized manner.

Why is it important?

The brain's ability to learn salient temporal features within a hierarchically organized sequence underlies a wide variety of cognitive functions in humans and other species, including sensory processing, motor learning, memory, and language processing. An important feature of the brain's mechanism for segmenting sequence information is the context dependence of its learning. The neural mechanisms underlying this flexible unsupervised learning remain unclear.

Perspectives

We used our model to detect patterned activation of neurons in large-scale data recorded from behaving animals. In our model, the accuracy and efficiency of learning do not degrade as the data size increases, which is rarely the case for machine learning methods designed for similar purposes. These results demonstrate a notable advantage of our model, especially as the need for efficient analysis methods in large-scale neural recording continues to grow.
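To give a flavor of what "detecting patterned activation of neurons" means, here is a toy sketch in Python. This is purely illustrative and is not the model from the paper: the spike matrix, the embedded "assembly" patterns, and the count thresholds are all invented for the example. It simply shows how recurring population-activity patterns can stand out against sparse background activity.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neurons, n_bins = 20, 500
# Sparse background activity: each neuron fires in ~2% of time bins.
spikes = (rng.random((n_neurons, n_bins)) < 0.02).astype(int)

# Embed two recurring "cell assembly" patterns (hypothetical ground truth).
assembly_a = np.arange(0, 5)     # neurons 0-4 fire together
assembly_b = np.arange(10, 15)   # neurons 10-14 fire together
a_times = rng.choice(n_bins, 40, replace=False)
b_times = rng.choice(n_bins, 40, replace=False)
spikes[np.ix_(assembly_a, a_times)] = 1
spikes[np.ix_(assembly_b, b_times)] = 1

def recurring_patterns(spike_matrix, min_count=10, min_size=3):
    """Return population patterns (frozensets of neuron ids) that
    recur at least `min_count` times across time bins."""
    counts = {}
    for t in range(spike_matrix.shape[1]):
        active = frozenset(np.flatnonzero(spike_matrix[:, t]))
        if len(active) >= min_size:
            counts[active] = counts.get(active, 0) + 1
    return {p: c for p, c in counts.items() if c >= min_count}

patterns = recurring_patterns(spikes)
# The two embedded assemblies recur far above chance and are recovered.
```

This brute-force counting works only for exact repeats in a small toy dataset; the appeal of a learning-based approach like the one in the paper is precisely that it scales to large, noisy recordings where such naive enumeration breaks down.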

Toshitake Asabuki

Read the Original

This page is a summary of: Neural circuit mechanisms of hierarchical sequence learning tested on large-scale recording data, PLoS Computational Biology, June 2022, PLOS,
DOI: 10.1371/journal.pcbi.1010214.
You can read the full text:

Read

Resources

Contributors

The following have contributed to this page