What is it about?

Neural networks are increasingly employed to model, analyze, and control non-linear dynamical systems arising in fields ranging from physics to biology. Owing to their universal approximation capabilities, they regularly outperform state-of-the-art model-driven methods in accuracy, computational speed, and/or control performance. On the other hand, neural networks are very often treated as black boxes whose explainability is limited. In this paper, we analyze how neural networks successfully manage the longstanding challenge of classifying signals as chaotic or regular. We consider a network architecture that lends itself well to analysis, the Large Kernel Convolutional Neural Network (LKCNN), and open its black box to reveal the underlying learning mechanisms.
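To make the architecture concrete, here is a minimal sketch of the LKCNN idea, assuming a single wide ("large kernel") 1D convolution, a ReLU, global average pooling, and a linear read-out. The kernel size, weights, and read-out here are illustrative assumptions, not the trained network from the paper.

```python
# Minimal sketch of the LKCNN idea (illustrative assumptions, not the
# trained network from the paper): one wide 1D convolution slides over
# the time series, followed by a ReLU, global pooling, and a linear
# read-out whose sign could be thresholded into chaotic vs. regular.

def conv1d(signal, kernel):
    """Valid-mode 1D cross-correlation of a signal with one kernel."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def relu(xs):
    return [max(0.0, x) for x in xs]

def global_avg_pool(xs):
    return sum(xs) / len(xs)

def lkcnn_score(signal, kernel, w, b):
    """Scalar score for one signal; a hypothetical read-out (w, b)."""
    feature = global_avg_pool(relu(conv1d(signal, kernel)))
    return w * feature + b
```

In a real LKCNN the kernel and read-out weights are learned from labeled chaotic/regular signals; the point of the large kernel is that a single filter spans many time steps of the input at once.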


Why is it important?

We have shown that to classify signals with high accuracy, LKCNNs use qualitative properties of the input sequence, which enables them to outperform classical methods. The accuracy advantage of LKCNNs is most pronounced for regular signals that are almost chaotic. We also investigated the connection that emerges between the periodicity of the input and periodicity within the network layers, and showed that this aspect is paramount for performance. This could provide new baseline requirements for neural network training.
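For concreteness, the chaotic/regular distinction can be made precise via the largest Lyapunov exponent. The sketch below uses the logistic map as an example signal generator; the parameter values and the exponent estimator are our illustrative choices, not the signal set used in the paper.

```python
import math

# Illustrative sketch only: the logistic map x_{n+1} = r * x_n * (1 - x_n)
# is regular at r = 3.5 (a period-4 orbit) and chaotic at r = 3.9.
# These parameters and the estimator are assumptions for illustration.

def logistic_orbit(r, x0=0.2, n=5000, burn_in=500):
    """Iterate the logistic map, discarding an initial transient."""
    x = x0
    for _ in range(burn_in):
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n):
        x = r * x * (1 - x)
        orbit.append(x)
    return orbit

def lyapunov(r, orbit):
    """Estimate the largest Lyapunov exponent as the orbit average of
    ln|f'(x)| with f'(x) = r * (1 - 2x); positive indicates chaos.
    A tiny offset guards against log(0) at x exactly 0.5."""
    return sum(math.log(abs(r * (1 - 2 * x)) + 1e-12)
               for x in orbit) / len(orbit)

regular = lyapunov(3.5, logistic_orbit(3.5))  # negative: regular dynamics
chaotic = lyapunov(3.9, logistic_orbit(3.9))  # positive: chaotic dynamics
```

Labels built this way (sign of the Lyapunov exponent) are a standard ground truth for training and evaluating chaos classifiers, and "almost chaotic" regular signals correspond to parameters just inside periodic windows, where the exponent is negative but close to zero.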


We hope that this article will draw more attention to the inner workings of neural networks, as there is a strong tendency to focus only on their performance. Furthermore, approaches grounded in an understanding of the internal mechanisms can lead to new networks that are more transparent and perform better.

Thomas Geert de Jong
Kanazawa University

Read the Original

This page is a summary of: How neural networks learn to classify chaotic time series, Chaos: An Interdisciplinary Journal of Nonlinear Science, December 2023, American Institute of Physics, DOI: 10.1063/5.0160813.