What is it about?
Sleep has long been shown to play a crucial role in stabilizing and reorganizing memories, and it is suspected to be the primary mechanism by which humans and other animals can continue to learn throughout life despite limited memory capacity. Artificial neural networks, as used in machine learning, are known to struggle with such continual learning, typically suffering catastrophic forgetting - a phenomenon in which training on a novel task completely overwrites previously learned memories. In this new work, we use a more biologically realistic spiking neural network, rather than the kind typically used in machine learning, to demonstrate that while these networks can also suffer catastrophic forgetting when trained like their more artificial counterparts, they can overcome it when training is interrupted with sleep-like periods.
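The forgetting problem described above can be made concrete with a small toy example. The sketch below uses plain logistic regression in Python/NumPy - it is not the paper's spiking-network model or its sleep-based method, and the two toy "tasks" and all names in the code are invented for illustration. A single classifier is trained on task A and then on task B; after the second round of training, its performance on task A typically collapses to chance, which is the catastrophic forgetting that sleep-like periods are meant to prevent.

```python
# A minimal sketch of catastrophic forgetting, for illustration only.
# Plain logistic regression in NumPy, NOT the spiking-network model or
# the sleep algorithm from the paper; the toy tasks are invented here.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n=400):
    # Points drawn around four cluster centres at (+/-2, +/-2).
    centres = np.array([[2, 2], [2, -2], [-2, 2], [-2, -2]], dtype=float)
    return centres[rng.integers(0, 4, n)] + rng.normal(0, 0.5, (n, 2))

def train(w, b, X, y, epochs=300, lr=0.1):
    # Full-batch gradient descent on the logistic loss.
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w = w - lr * X.T @ (p - y) / len(y)
        b = b - lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0) == y)

X = make_data()
y_a = (X[:, 0] > 0).astype(float)   # task A: classify by the first input
y_b = (X[:, 1] > 0).astype(float)   # task B: classify by the second input

w, b = np.zeros(2), 0.0
w, b = train(w, b, X, y_a)
print(f"after task A: accuracy on A = {accuracy(w, b, X, y_a):.2f}")

w, b = train(w, b, X, y_b)          # sequential training, no interleaving
print(f"after task B: accuracy on B = {accuracy(w, b, X, y_b):.2f}")
print(f"after task B: accuracy on A = {accuracy(w, b, X, y_a):.2f} "
      "(typically near chance: task A has been forgotten)")
```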
Why is it important?
Our findings demonstrate that sleep-like states may also be necessary for the further development of artificial intelligence systems. Significantly, they also demonstrate that the benefits of sleep are largely a consequence of information that already exists in the network. This is important because it suggests there is no need to store specific training examples for later replay during sleep-like periods, a strategy that remains largely the standard in machine learning.
Read the Original
This page is a summary of: Sleep prevents catastrophic forgetting in spiking neural networks by forming a joint synaptic weight representation, PLOS Computational Biology, November 2022.
DOI: 10.1371/journal.pcbi.1010628