What is it about?

Interpretability of Machine Learning (ML) models is a key requirement, especially in high-stakes real-world applications where it is essential to understand why a certain prediction was made. In this work, we develop a Genetic Programming (GP)-based human-in-the-loop system for learning tree-based ML models that are potentially interpretable for the specific user of the system. Specifically, we run a GP evolution driven by a bi-objective function, optimized with NSGA-II, composed of a quality measure for the given ML problem (e.g., Mean Squared Error for regression) and the interpretability of the model. Interpretability is estimated by a neural network that is trained online, during the optimization process itself, using user feedback on the evolved models. The approach is applicable to a broad range of ML problems as long as the evolved solutions can be represented by a tree-like structure, because we design different model encodings for the neural network estimator that are agnostic with respect to the specific problem domain.
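To make the idea more concrete, below is a minimal Python sketch of the bi-objective evaluation under several simplifying assumptions that are not taken from the paper: symbolic trees are represented as nested tuples, the neural interpretability estimator is replaced by a small logistic model updated online from user ratings, and the model encoding is reduced to two hand-picked features (tree size and depth). The names `tree_features`, `InterpretabilityEstimator`, and `bi_objective_fitness` are illustrative only, and the NSGA-II selection step itself is omitted.

```python
import math

# Hypothetical tree representation: ("x",), ("const", value),
# or ("+" | "*", left_subtree, right_subtree).
def evaluate(tree, x):
    """Recursively evaluate a symbolic expression tree at input x."""
    op = tree[0]
    if op == "x":
        return x
    if op == "const":
        return tree[1]
    left, right = evaluate(tree[1], x), evaluate(tree[2], x)
    return left + right if op == "+" else left * right

def tree_features(tree):
    """Encode a tree as a fixed-length, domain-agnostic feature vector.
    Only size and depth are used here; the paper feeds richer model
    encodings to its neural estimator."""
    def size(t):
        return 1 if t[0] in ("x", "const") else 1 + size(t[1]) + size(t[2])
    def depth(t):
        return 1 if t[0] in ("x", "const") else 1 + max(depth(t[1]), depth(t[2]))
    return [float(size(tree)), float(depth(tree))]

class InterpretabilityEstimator:
    """Stand-in for the online-trained neural network: a logistic model
    updated with one SGD step per piece of user feedback in [0, 1]."""
    def __init__(self, n_features, lr=0.05):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, features):
        z = self.b + sum(wi * fi for wi, fi in zip(self.w, features))
        return 1.0 / (1.0 + math.exp(-z))  # estimated interpretability in (0, 1)

    def update(self, features, user_score):
        """Move the estimate toward the user's interpretability rating."""
        err = self.predict(features) - user_score
        self.b -= self.lr * err
        self.w = [wi - self.lr * err * fi for wi, fi in zip(self.w, features)]

def bi_objective_fitness(tree, data, estimator):
    """The two objectives NSGA-II would minimize:
    (prediction error, 1 - estimated interpretability)."""
    mse = sum((evaluate(tree, x) - y) ** 2 for x, y in data) / len(data)
    return mse, 1.0 - estimator.predict(tree_features(tree))

if __name__ == "__main__":
    data = [(x, 3.0 * x + 1.0) for x in range(-5, 6)]
    model = ("+", ("*", ("const", 3.0), ("x",)), ("const", 1.0))  # 3*x + 1
    estimator = InterpretabilityEstimator(n_features=2)
    estimator.update(tree_features(model), user_score=0.9)  # simulated user feedback
    print(bi_objective_fitness(model, data, estimator))     # -> (0.0, ~0.33)
```

In the actual system, these two objective values would be passed to NSGA-II's non-dominated sorting, and the estimator would be updated repeatedly as new user feedback on evolved models arrives during the run.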

Read the Original

This page is a summary of: An Analysis of the Ingredients for Learning Interpretable Symbolic Regression Models with Human-in-the-loop and Genetic Programming, ACM Transactions on Evolutionary Learning and Optimization, February 2024, ACM (Association for Computing Machinery),
DOI: 10.1145/3643688.
