What is it about?

Neuronal diversity is not without rules. In particular, excitability (the neuronal activation function) is consistently distributed according to a lognormal law, a heavy-tailed distribution in which a few highly excitable neurons dominate over many neurons of low excitability. Furthermore, this distribution is actively maintained, which requires positive, Hebbian learning for intrinsic excitability (IE). Earlier ideas that the observed adaptation of excitability had a purely homeostatic function, i.e. only served to counteract abnormally high or low input in order to keep the neuron within its operating range, turn out to be wrong. The current study demonstrates that purely homeostatic adaptation cannot maintain a lognormal distribution of IE: instead, homeostatic adaptation smooths over differences and produces a Gaussian (normal) distribution. Intrinsic adaptation must follow a Hebbian rule.
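
To make the contrast concrete, here is a minimal simulation sketch, assuming a simple mean-reverting rule as the homeostatic mechanism and a multiplicative update standing in for a Hebbian IE rule (an illustration, not the study's actual model): additive, set-point-seeking updates drive the population toward a Gaussian, while multiplicative updates keep log-excitability Gaussian, so the distribution stays lognormal.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)
n_neurons, n_steps, eta, target = 10_000, 2_000, 0.01, 1.0

# Both populations start from the same lognormal distribution of
# intrinsic excitability (IE).
ie_homeo = rng.lognormal(mean=0.0, sigma=1.0, size=n_neurons)
ie_hebb = ie_homeo.copy()

for _ in range(n_steps):
    # Purely homeostatic rule: additive drift toward a common set point,
    # plus additive noise. Repeated independent additive updates push the
    # population toward a Gaussian, smoothing over the heavy tail.
    ie_homeo += eta * (target - ie_homeo) + eta * rng.normal(size=n_neurons)

    # Multiplicative stand-in for a Hebbian IE rule: each update scales
    # the current value, so log(IE) stays Gaussian and IE stays lognormal.
    ie_hebb *= np.exp(eta * rng.normal(size=n_neurons))

print(f"homeostatic IE skew: {skew(ie_homeo):+.2f}")  # near 0: Gaussian
print(f"Hebbian IE skew:     {skew(ie_hebb):+.2f}")   # large, positive: heavy tail kept
```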


Why is it important?

Ever since John Hopfield started the physics revolution in neuroscience, statistical physics has been the driving force behind the development of artificial neural networks. When applications took over later on, certain dogmas became firmly established. Statistical physics, like the complexity theory that historically grew out of it, assumes a large number of identical particles which together exhibit emergent properties. Neural networks were accordingly constructed from large numbers of identical units (neurons) that interact through learnable, trainable connections, and artificial neural networks became a branch of statistical pattern recognition of increasing sophistication and breadth of application. These ideas have also been received wisdom in theoretical neuroscience, where they have formed a foundation for computational modeling for more than twenty years. It turns out that the brain operates on different principles: neurons are not identical but highly diverse, yet their excitability follows a clear law.

Perspectives

All data were taken from the existing literature or obtained directly from experimental scientists; no new animals were used for this study.

Dr Gabriele Scheler
Carl Correns Foundation

Read the Original

This page is a summary of: Logarithmic distributions prove that intrinsic learning is Hebbian, F1000Research, October 2017, Faculty of 1000, Ltd. DOI: 10.12688/f1000research.12130.2
You can read the full text, which is open access, at https://doi.org/10.12688/f1000research.12130.2


Contributors

The following have contributed to this page:
Dr Gabriele Scheler, Carl Correns Foundation