Improving Spiking Dynamical Networks: Accurate Delays, Higher-Order Synapses, and Time Cells

Aaron R. Voelker, Chris Eliasmith
  • Neural Computation, March 2018, The MIT Press
  • DOI: 10.1162/neco_a_01046

Delaying concepts using an efficient neural code, detailed synapse models, and spiking neurons

What is it about?

This article presents a mathematical framework for implementing dynamical systems (i.e., differential equations) in recurrent spiking neural networks using detailed synapse models. The framework is used to train a network to represent the history of concepts, as they continuously change over time, by delaying them in memory. The network optimally compresses this history into a scale-invariant, low-dimensional state space. The resulting "temporal code" is systematically analyzed and shown to produce neural responses that are qualitatively and quantitatively similar to the "time cells" recently discovered in rodents during delay tasks.
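To make the idea of compressing a delayed history into a low-dimensional state space concrete, here is a minimal numerical sketch. It approximates a pure delay of theta seconds by a small linear system x' = Ax + Bu, in the spirit of the Padé-approximant delay systems analyzed in the article. The specific closed-form matrices below follow the Legendre-style construction from the authors' later work and are an assumption for illustration, not the paper's exact derivation; the spiking-neuron and synapse-model layers are omitted.

```python
import numpy as np

def delay_state_space(q, theta):
    """Sketch of a q-dimensional LTI system (A, B, C) whose transfer function
    C (sI - A)^{-1} B approximates a pure delay exp(-theta * s).
    Closed form assumed from the Legendre/Pade construction (illustrative)."""
    i = np.arange(q)[:, None]
    j = np.arange(q)[None, :]
    scale = (2 * np.arange(q) + 1) / theta           # (2i + 1) / theta
    A = np.where(i < j, -1.0, (-1.0) ** (i - j + 1)) * scale[:, None]
    B = ((-1.0) ** np.arange(q)) * scale             # input drives every state
    C = np.ones(q)                                   # decode the window at t - theta
    return A, B, C

def simulate_delay(u, dt, q=6, theta=0.5):
    """Euler-integrate x' = Ax + Bu and return the decoded delayed signal."""
    A, B, C = delay_state_space(q, theta)
    x = np.zeros(q)
    y = np.empty(len(u))
    for k, u_k in enumerate(u):
        x = x + dt * (A @ x + B * u_k)
        y[k] = C @ x
    return y

# Usage: feed a sinusoid; after a brief transient, the q-dimensional state
# reproduces the input delayed by theta seconds.
dt = 1e-4
t = np.arange(0.0, 3.0, dt)
y = simulate_delay(np.sin(2 * np.pi * t), dt, q=6, theta=0.5)
```

Note that only q state variables are kept per delayed signal, rather than a long tapped buffer of samples; this is the sense in which the history is optimally compressed into a low-dimensional state space.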

Why is it important?

Brains must constantly deal with time-varying information in a dynamic environment. Meaningful actions depend not only on the current state of the world, but also on how that state is changing over time. The architecture presented in this article provides a mechanism that allows spiking neurons, coupled through detailed synapse models, to optimally represent the history of time-varying information. Moreover, the framework enables the computation of nonlinear functions across this rolling window of history. This establishes a bridge for understanding how a wide class of dynamic computations might relate to neural activity.

Perspectives

Aaron Voelker (Author)
University of Waterloo

Imagine yourself playing a video game, driving a car, or engaging in a physical sport. We often take for granted our brain's ability to fluidly interact with the world in such scenarios, where our actions continuously depend on the history of how multiple concepts are changing over time. Even the most basic sub-problem, accurately representing the history of a time-varying signal (whether driven externally or internally) in a spiking neural network, is deceptively challenging. We provide a flexible, efficient, and biologically plausible solution to this problem.
