What is it about?

The Koopman operator provides a powerful framework for data-driven analysis of dynamical systems. In the last few years, a wealth of numerical methods providing finite-dimensional approximations of the operator have been proposed [e.g., extended dynamic mode decomposition (EDMD) and its variants]. While convergence results for EDMD require an infinite number of dictionary elements, recent studies have shown that only a few dictionary elements can yield an efficient approximation of the Koopman operator, provided that they are well chosen through a proper training process. However, this training process typically relies on nonlinear optimization techniques. In this paper, we propose two novel methods based on a reservoir computer to train the dictionary. These methods rely solely on linear convex optimization. We illustrate the efficiency of the methods with several numerical examples in the context of data reconstruction, prediction, and computation of the Koopman operator spectrum. These results pave the way for the use of the reservoir computer in the Koopman operator framework.

The Koopman operator makes it possible to turn nonlinear dynamical systems into linear ones. In this framework, dynamical systems can be studied with systematic linear techniques and, in particular, are amenable to spectral analysis. However, there is a price to pay: the Koopman operator is infinite-dimensional and must be approximated by a finite-rank operator (i.e., a matrix) as soon as numerical methods come into play. This approximation requires choosing a finite-dimensional subspace, a choice that is not necessarily appropriate since it is made a priori. Recent methods have been proposed that use neural networks to “learn” the best finite-dimensional approximation subspace. The main drawback of these methods is that they rely on nonlinear optimization.
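As a point of reference, the EDMD step itself fits in a few lines: given snapshot pairs and a fixed dictionary, the finite-dimensional Koopman approximation is obtained by least squares. The sketch below is illustrative only; the monomial dictionary and the toy linear map are hypothetical choices, not data or dictionaries from the paper.

```python
import numpy as np

def edmd(X, Y, dictionary):
    """Minimal EDMD sketch.

    X, Y       : snapshot pairs (n_samples, n_states), with y_k the successor of x_k
    dictionary : callable mapping (n_samples, n_states) -> (n_samples, n_dict)
    Returns the matrix K approximating the Koopman operator on the
    span of the dictionary, via the least-squares problem min ||PY - PX K||_F.
    """
    PX = dictionary(X)  # dictionary evaluated on current states
    PY = dictionary(Y)  # dictionary evaluated on successor states
    K, *_ = np.linalg.lstsq(PX, PY, rcond=None)
    return K

# Hypothetical dictionary: monomials {1, x, x^2} of a scalar state
dict2 = lambda X: np.hstack([np.ones_like(X), X, X**2])

# Snapshots from the toy linear map x -> 0.5 x (for illustration only)
x = np.linspace(-1.0, 1.0, 50).reshape(-1, 1)
K = edmd(x, 0.5 * x, dict2)

# Eigenvalues of K recover the Koopman eigenvalues 1, 0.5, 0.25
eigs = np.sort(np.abs(np.linalg.eigvals(K)))[::-1]
```

Because the dictionary is fixed a priori here, the quality of the approximation hinges entirely on that choice, which is exactly the limitation that a trained dictionary is meant to address.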
In this paper, we propose to obtain a finite-dimensional approximation of the Koopman operator by using a reservoir computer. The reservoir computer is a specific recurrent neural network in which only the weights of the output layer are trained on the data, a training that can be performed with linear, convex optimization. By considering either the internal nodes or the output nodes of the reservoir computer to span the finite-dimensional approximation subspace, we derive two novel methods that compute a finite-dimensional approximation of the Koopman operator.
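The training principle described above can be illustrated with a minimal echo-state-network sketch: the internal weights are drawn at random and kept fixed, and only the linear readout is trained, by ridge regression. All sizes, the input signal, and the regularization below are hypothetical choices for illustration, not the settings of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: internal weights W and input weights W_in are NOT trained
N, n_in = 100, 1
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # rescale spectral radius below 1
W_in = rng.uniform(-0.5, 0.5, size=(N, n_in))

def run_reservoir(u):
    """Drive the reservoir with the input sequence u and collect internal states."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x.copy())
    return np.array(states)

# One-step-ahead prediction of a sine wave (illustrative data only)
t = np.linspace(0.0, 20.0, 500)
u = np.sin(t)
S = run_reservoir(u[:-1])  # internal states driven by u_0 .. u_{T-2}
target = u[1:]             # next input value to predict

# Training is linear and convex: ridge regression on the readout weights only
lam = 1e-6
W_out = np.linalg.solve(S.T @ S + lam * np.eye(N), S.T @ target)
pred = S @ W_out
```

The key point mirrored here is that the nonlinear part of the network (the reservoir) is never optimized; the data only enter through a linear least-squares problem for the readout.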


Why is it important?

While convergence results for EDMD require an infinite number of dictionary elements, recent studies have shown that only a few dictionary elements can yield an efficient approximation of the Koopman operator, provided that they are well chosen through a proper training process. However, this training process typically relies on nonlinear optimization techniques. In this paper, we propose two novel methods based on a reservoir computer to train the dictionary. These methods rely solely on linear convex optimization.

Perspectives

The proposed methods could be used on real datasets, in the context of spectral analysis, network identification, time-series classification, event detection, and predictive control.

Marvyn Vincenzo Gulina
Université de Namur

Read the Original

This page is a summary of: Two methods to approximate the Koopman operator with a reservoir computer, Chaos: An Interdisciplinary Journal of Nonlinear Science, February 2021, American Institute of Physics, DOI: 10.1063/5.0026380.
