What is it about?

In this paper, we demonstrate how a recently developed method for learning in so-called spiking neural networks can be tweaked to work on more complex, real-world examples. Spiking neural networks are neural networks in which the neurons communicate rarely, through all-or-none events ("spikes"), much as biological brains do. The "Eventprop" algorithm was developed to train such networks by optimising a measure of the quality of the network's answers, the so-called loss. When we applied the algorithm as originally developed to a speech recognition task, it failed. In this paper, we show how to change the way the loss is measured so that Eventprop can successfully train networks to recognise keywords.
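The idea of "measuring the loss differently" can be sketched in a few lines. The code below is a hypothetical illustration, not the paper's implementation: it contrasts a loss computed from the network's output at a single point in time with a "shaped" loss computed from the output averaged over the whole trial, which gives the training signal information from every time step. The array `v`, the function names, and the random data are all invented for this sketch.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def loss_at_end(v, label):
    """Cross-entropy computed only from the final time step's output."""
    p = softmax(v[-1])
    return -np.log(p[label])

def loss_time_averaged(v, label):
    """'Shaped' cross-entropy computed from the time-averaged output."""
    p = softmax(v.mean(axis=0))
    return -np.log(p[label])

# Toy data: 100 time steps of membrane voltage for 3 output neurons,
# with class 1 made the strongest on average.
rng = np.random.default_rng(0)
v = rng.normal(size=(100, 3))
v[:, 1] += 0.5

print(loss_at_end(v, 1), loss_time_averaged(v, 1))
```

Because the time-averaged loss depends on the output throughout the trial rather than at one instant, its gradient carries information even when the final reading happens to be uninformative; choices of this kind are what the paper refers to as loss shaping.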

Why is it important?

The Eventprop algorithm was an important advance in training spiking neural networks, but if it only worked on initial diagnostic examples and not on more realistic problems, it would be of very limited use. With this paper, we demonstrate that, with a bit of extra work, the algorithm can be used on larger-scale, more relevant examples.

Perspectives

This paper is the result of substantial work and a personal voyage of discovery into a somewhat different research direction that I undertook during a sabbatical in 2021/22. I feel that it is an important advance, though it may look more like a tweak on others' new method than a fundamental paradigm change.

Professor Thomas Nowotny
University of Sussex

Read the Original

This page is a summary of: Loss shaping enhances exact gradient learning with Eventprop in spiking neural networks, Neuromorphic Computing and Engineering, January 2025, Institute of Physics Publishing,
DOI: 10.1088/2634-4386/ada852.
You can read the full text via the DOI above.
