What is it about?

CACTUS is a novel framework for explainable classification that is particularly effective on small and fragmented data sets, typical of domains where data collection is expensive or impractical (e.g., healthcare). It reduces feature complexity and applies statistical principles to produce intuitive yet effective outcomes, outperforming standard machine learning models when missing values are present.
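To make the general idea of abstraction-based, explainable classification concrete, here is a minimal hypothetical sketch in Python (an illustration only, not the published CACTUS algorithm): continuous features are abstracted into a few coarse bins, missing values get a bin of their own, and per-bin class frequencies form a decision rule that can be read directly.

from collections import Counter, defaultdict

def abstract(value, cutoffs):
    """Map a raw value to a coarse bin; None (missing) gets its own bin."""
    if value is None:
        return "missing"
    for i, cut in enumerate(cutoffs):
        if value <= cut:
            return f"bin_{i}"
    return f"bin_{len(cutoffs)}"

# Hypothetical toy data: one feature with missing values, two classes.
samples = [(1.2, "healthy"), (None, "healthy"), (3.8, "disease"),
           (4.1, "disease"), (0.9, "healthy"), (None, "disease")]
cutoffs = [2.0]  # one cutoff -> "low" vs "high" abstraction

# Count how often each class occurs in each abstracted bin.
counts = defaultdict(Counter)
for value, label in samples:
    counts[abstract(value, cutoffs)][label] += 1

# The table of per-bin class frequencies *is* the explanation.
for bin_name, class_counts in counts.items():
    print(bin_name, dict(class_counts))

# Classify a new sample by the majority class of its bin.
new_value = 3.5
prediction = counts[abstract(new_value, cutoffs)].most_common(1)[0][0]
print("predicted:", prediction)

Because missing values are treated as an abstraction level in their own right rather than being imputed or discarded, such a rule keeps working when the data are incomplete.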

Why is it important?

It is important to explain the reasoning behind a classification: this shows which features are important for discriminating between the classes and how their interplay is modelled. Such transparency is paramount for ensuring that the model operates without biases or erratic reasoning.

Perspectives

Helps to reassure people of the potential of AI technology.

Roger Woods
Queen's University Belfast

I think that this is a very flexible, light and straightforward tool to perform analyses on multiple levels. It provides very intuitive representations of the reality contained in the data set that can be further modelled and tailored to the user's needs. There are still many ideas to be explored and tested, but it was very stimulating to work on this project and contribute to the scientific literature on explainable AI.

Luca Gherardini
Sano Centre for Computational Personalised Medicine

Read the Original

This page is a summary of: CACTUS: A Comprehensive Abstraction and Classification Tool for Uncovering Structures, ACM Transactions on Intelligent Systems and Technology, April 2024, ACM (Association for Computing Machinery),
DOI: 10.1145/3649459.
