What is it about?

This work proposes a novel hybrid explainable semi-personalized federated learning model that uses Shapley Values and the Lipschitz Constant to build personalized intelligent local models, tailored to the needs and events each user must address locally. Specifically, the system provides clear explanations of why the model made a particular decision on locally handled data. It then tracks how the training of the model evolves and dictates which hyperparameters should be trained locally, yielding a model that responds optimally to the local problems it is called upon to face.
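As a rough illustration of the Shapley-value component, the following sketch estimates per-feature Shapley values for a single prediction of a trained local model by Monte Carlo sampling over feature orderings. The model interface (`predict_proba`), the background dataset, and all names are illustrative assumptions made for this summary, not the paper's actual implementation.

```python
import numpy as np

def shapley_values_mc(model, x, background, n_samples=200, seed=None):
    """Monte Carlo estimate of Shapley values for one instance.

    model      : any object exposing predict_proba(X) -> (n, n_classes)
    x          : 1-D array, the instance to explain
    background : 2-D array of reference rows used to 'switch off' features
    """
    rng = np.random.default_rng(seed)
    n_features = x.shape[0]
    phi = np.zeros(n_features)

    for _ in range(n_samples):
        order = rng.permutation(n_features)                    # random feature ordering
        z = background[rng.integers(len(background))].copy()   # random reference row
        prev = model.predict_proba(z[None, :])[0, 1]
        for j in order:
            z[j] = x[j]                                        # reveal feature j
            curr = model.predict_proba(z[None, :])[0, 1]
            phi[j] += curr - prev                              # marginal contribution of j
            prev = curr

    return phi / n_samples
```

Features with large absolute Shapley values are the ones that drove the local decision, which is the kind of per-prediction explanation the system exposes to each user.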

Why is it important?

This cutting-edge approach has not been proposed before in the relevant literature, and we believe it has the potential to considerably extend the state of the art in explainable artificial intelligence. As demonstrated experimentally, the technique gives an understanding of how the model makes decisions and how the features it uses interact to produce a correct or incorrect classification. The model provides information about the interaction between the target response for a particular input and a feature of interest. It also allows the federated learning model to be personalized for each user, so that only the necessary characteristics of the model are retrained, based on that user's needs and the events the model is called on to handle. It thus offers the ability to manage, control, and explain how multiple intermediate representations are handled, as well as more advanced features that may relate to the hierarchical organization of a neural system.
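To make the personalization idea concrete, the sketch below shows one way per-feature importance scores (for example, mean absolute Shapley values from the sketch above) could gate which parts of the shared federated model a client actually retrains, keeping everything else frozen at the global weights. The PyTorch architecture, the threshold rule, and the gradient-masking trick are assumptions made for illustration, not the paper's exact personalization mechanism.

```python
import torch
import torch.nn as nn

def personalize_local_model(global_state, importance, n_hidden=32,
                            threshold=0.01, lr=1e-3):
    """Copy the shared federated model and retrain only the parts tied to
    features that matter for this particular client.

    global_state : state_dict of the shared (global) model
    importance   : 1-D tensor of per-feature scores, e.g. mean |Shapley value|
    """
    n_features = importance.numel()
    model = nn.Sequential(
        nn.Linear(n_features, n_hidden), nn.ReLU(), nn.Linear(n_hidden, 2)
    )
    model.load_state_dict(global_state)

    # Freeze every parameter of the global model by default.
    for p in model.parameters():
        p.requires_grad = False

    # Re-enable the input layer, but mask its gradient so that only the
    # weight columns of locally important features are actually updated.
    mask = (importance >= threshold).float()      # 1 = retrain, 0 = keep global
    first_layer = model[0]
    first_layer.weight.requires_grad = True
    first_layer.weight.register_hook(lambda grad: grad * mask.unsqueeze(0))

    trainable = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.Adam(trainable, lr=lr)
    return model, optimizer
```

Freezing most of the global model and adapting only the locally relevant weights keeps local training costs low while letting each client respond to its own data distribution.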

Perspectives

The proposed system achieves high accuracy with a white-box algorithm that is interpretable in itself. This is especially important in domains such as medicine, defense, finance, and law, where it is crucial to understand the decisions being made and to build trust in the algorithms.

Konstantinos Demertzis

Read the Original

This page is a summary of: An explainable semi-personalized federated learning model, Integrated Computer-Aided Engineering, August 2022, IOS Press,
DOI: 10.3233/ica-220683.
