What is it about?

Interpretability is a central challenge in artificial intelligence. Why does a model say what it says? What factors influence its decisions? What does it really learn? In our article, we explore these questions using a neural network trained on a video of a physical system, under the constraint of maximizing information compression. What does the network retain? Using rigorous mathematics, we show that it learns the flow of the underlying dynamics, preserving the topological structure of the original system.
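
As a schematic illustration only (not the authors' actual code), the setup can be pictured as an autoencoder whose narrow bottleneck enforces the compression constraint: each video frame must be squeezed through a handful of latent variables and then reconstructed. The architecture, names, and dimensions below are assumptions made for the sketch.

```python
# A minimal sketch, assuming a simple fully connected autoencoder; the names,
# dimensions, and training details are illustrative, not taken from the article.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, frame_dim: int, latent_dim: int = 3):
        super().__init__()
        # The narrow bottleneck (latent_dim) enforces information compression.
        self.encoder = nn.Sequential(
            nn.Linear(frame_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, frame_dim),
        )

    def forward(self, x):
        z = self.encoder(x)           # compressed latent state
        return self.decoder(z), z     # reconstructed frame and latent state

def train(frames: torch.Tensor, epochs: int = 200) -> Autoencoder:
    """frames: (num_frames, frame_dim) tensor of flattened video frames."""
    model = Autoencoder(frame_dim=frames.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        recon, _ = model(frames)
        loss = nn.functional.mse_loss(recon, frames)  # reconstruction error
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model
```

After training, passing the frames through the encoder yields one latent point per frame; in this picture, the article's claim is that the resulting latent trajectory traces out the flow of the underlying dynamics, up to a change of coordinates that preserves its topology.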

Why is it important?

We present a method that tackles a key step in data-driven model discovery: obtaining the state variables of a system from recorded data. The data can be high-dimensional, such as a video, or a partial measurement of the system. Our work sheds light on the interpretability and reliability of deep learning methods in data science.
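
For the case of a partial measurement, a classical ingredient in attractor reconstruction (shown here for context, not as the article's method) is a time-delay embedding in the spirit of Takens' theorem: stacking delayed copies of a single recorded signal to build candidate state vectors. A hypothetical sketch:

```python
# A hypothetical sketch of a time-delay embedding, a classical technique for
# reconstructing state vectors from a partial (scalar) measurement.
import numpy as np

def delay_embed(signal: np.ndarray, dim: int = 3, tau: int = 25) -> np.ndarray:
    """Stack delayed copies of a scalar time series into embedding vectors."""
    n = len(signal) - (dim - 1) * tau
    return np.stack([signal[i * tau : i * tau + n] for i in range(dim)], axis=1)

# Example: a stand-in scalar observable of an unknown system.
t = np.linspace(0.0, 100.0, 5000)
x = np.sin(t) + 0.5 * np.sin(2.3 * t)
vectors = delay_embed(x, dim=3, tau=25)   # shape (4950, 3)
```

Each row is a candidate state vector; such vectors, or raw high-dimensional frames, can then serve as input to the kind of compressing network sketched above.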

Perspectives

While machine learning applications tend to focus on how to get more out of an architecture, the problem of interpretability, in all its forms, is a mathematically interesting question and, without exaggeration, one of existential importance. Our article takes an important step in this direction.

Facundo Fainstein
Universidad de Buenos Aires

Read the Original

This page is a summary of: Reconstructing attractors with autoencoders, Chaos: An Interdisciplinary Journal of Nonlinear Science, January 2025, American Institute of Physics, DOI: 10.1063/5.0232584.
