What is it about?

Interpretability methods are more rigorous than explainability methods, yet we noticed that no existing review covers and explains all of them. We therefore carried out a literature search in order to bring the interpretability methods together in a single paper.


Why is it important?

In this review, beyond listing the various methods, we provide a classification of interpretability methods into strong interpretability and weak interpretability. We also offer technical explanations intended to help readers both use these techniques and develop new ones.

Perspectives

Writing this article allowed us to understand how important it is to evaluate models not only by their accuracy, but also by the degree of interpretability they offer the user when producing an output.

Antonio Di Marino
Consiglio Nazionale delle Ricerche

Read the Original

This page is a summary of: Ante-Hoc Methods for Interpretable Deep Models: A Survey, ACM Computing Surveys, April 2025, ACM (Association for Computing Machinery),
DOI: 10.1145/3728637.
You can read the full text:


Contributors

The following have contributed to this page