What is it about?

The assumption of unstructured "noise," i.e., that whatever is not the interesting "signal" in a dataset is pure randomness, has pervaded analytical studies of algorithms for processing big data sets. However, this hypothesis is often too simplistic to capture realistic scenarios. We therefore need to understand the role of the noise structure, namely the presence of correlations and dependencies within the noise, when extracting a signal from corrupted data. We address this problem in a prototypical model where a simple matrix (i.e., an array) of numbers representing the sought information, the signal, is hidden inside a complex matrix with non-trivial statistical dependencies among its entries, representing the unwanted noise. The extraction is typically performed via Principal Component Analysis (PCA), probably the most frequently employed signal-processing and data-cleaning technique across the sciences for making sense of noisy high-dimensional data. Despite its apparent simplicity, our model of PCA is theoretically challenging to analyse and captures features observed in real scenarios. We also propose a new algorithm that saturates the performance limits predicted by our theory by optimally exploiting the statistical correlations present in the noise. The resulting picture shows that algorithms should exploit the structure of the signal and of the noise jointly in order to better clean the data at hand.
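For readers who like to experiment, here is a minimal sketch of the kind of model described above: a rank-one "signal" matrix buried in a symmetric random "noise" matrix, once with independent entries and once with correlated entries, cleaned by vanilla PCA (taking the top eigenvector). The parameters n and snr and the specific noise construction (a recentred squared Wigner matrix) are illustrative assumptions for this sketch, not the paper's exact setting.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000     # matrix size (illustrative)
snr = 3.0    # signal strength (hypothetical value)

# Hidden signal: a rank-one "simple matrix" built from a random unit vector x.
x = rng.standard_normal(n)
x /= np.linalg.norm(x)
signal = snr * np.outer(x, x)

# Unstructured noise: a Wigner matrix (symmetric, i.i.d. Gaussian entries).
g = rng.standard_normal((n, n))
wigner = (g + g.T) / np.sqrt(2 * n)

# Structured noise: purely for illustration, square the Wigner matrix and
# recentre it; this makes the entries statistically dependent and deforms
# the eigenvalue spectrum away from the classical semicircle.
w2 = wigner @ wigner
structured = w2 - (np.trace(w2) / n) * np.eye(n)

def pca_overlap(noise):
    """Run vanilla PCA on signal + noise and return the overlap |<v, x>|
    between the top eigenvector v and the hidden direction x."""
    eigvals, eigvecs = np.linalg.eigh(signal + noise)
    v = eigvecs[:, -1]   # eigenvector of the largest eigenvalue
    return abs(v @ x)

print(f"PCA overlap, unstructured noise: {pca_overlap(wigner):.3f}")
print(f"PCA overlap, structured noise:   {pca_overlap(structured):.3f}")
```

Running a toy experiment like this shows how the same signal strength can yield very different recovery quality depending on the noise statistics; the paper's contribution is to characterize the best achievable performance in such structured settings and to design algorithms that, unlike vanilla PCA, reach it.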

Why is it important?

Virtually all fields of science, technology and industry nowadays rely on processing big data sets in some way. It is therefore essential to design efficient algorithms that extract, in the best possible way, the relevant information (i.e., the "signal") hidden in high-dimensional data by corrupting "noise". Traditionally, the emphasis has been on algorithmically exploiting the structure of the signal alone while assuming that the noise is pure randomness. This is, however, not realistic. This work is among the first to shift the focus by designing and analysing algorithms that exploit the signal and noise structure together in order to better clean the data. It offers a new set of analytic, algorithmic and conceptual ideas which will hopefully pave the way to more efficient algorithms for signal extraction in various contexts. This perspective could have an impact on many applied fields, ranging from neuroscience, biology, medicine and imaging to physics (astronomy, particle physics, etc.), finance and climate modelling.

Perspectives

This work started as a technical challenge: to prove that certain existing algorithms were optimal at signal extraction when the corrupting noise is structured. But it quickly became evident that the underlying framework was much richer than expected and that novel ideas were needed, which required us to combine tools from a wide range of disciplines. Beyond the new phenomena we uncovered, the most surprising outcome to me was that we ended up showing the exact opposite of what we had originally aimed at: current algorithms were actually sub-optimal. We understood why, and were thus able to provide original procedures that saturate the newly predicted theoretical limits. This was very exciting and unexpected, with a beautiful and coherent story unfolding as the research advanced.

Jean Barbier
The Abdus Salam International Centre for Theoretical Physics

What is noise? What is a datum? Our project started from these two simple questions. Typically, data and noise are intertwined. Disentangling them can be a true challenge, and it is of major importance in all quantitative disciplines. As intuition suggests, correctly exploiting the nature of the two can be the key to success. In particular, we quickly realized that smart enough algorithms could in principle exploit patterns, or correlations, in the noise to extract data more efficiently. Yet explaining how this is possible is not as easy as it seems. In my opinion, the most difficult part was to obtain an embryonic theory for simpler cases. Starting from that, brick by brick, we were finally able to give a complete picture, which required introducing brand-new tools and merging techniques from Information Theory, Statistical Physics and Random Matrix Theory. This work was a true adventure. I really treasure those moments of revelation we had together, when you feel you have found the missing piece of the puzzle.

Francesco Camilli
The Abdus Salam International Centre for Theoretical Physics

Read the Original

This page is a summary of: Fundamental limits in structured principal component analysis and how to reach them, Proceedings of the National Academy of Sciences, July 2023.
DOI: 10.1073/pnas.2302028120.
