What is it about?

Providing explanations for deep machine learning decisions is a critical problem, especially in high-stakes application sectors such as healthcare, finance, or law enforcement. One way to address the explainability problem in Graph Neural Networks (GNNs) is counterfactual reasoning, where the objective is to change the GNN prediction through minimal changes to the input graph. Existing methods for counterfactual explanation of GNNs are limited to instance-specific, local reasoning. In this work, we study a novel problem: the global explainability of GNNs through global counterfactual reasoning. Specifically, we want to find a small set of representative counterfactual graphs that together explain all input graphs. Drug discovery is one of the main applications of this work.
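To make the idea concrete, here is a minimal toy sketch (not the paper's actual algorithm) of the "small representative set" intuition: given candidate counterfactual graphs, greedily pick candidates until every input graph is within a small edit distance of some chosen counterfactual. Graphs are represented as edge sets, the symmetric-difference size stands in for graph edit distance, and the threshold `theta` is an assumed illustration parameter.

```python
# Toy illustration of global counterfactual summarization (NOT the paper's
# method): greedy set cover over candidate counterfactual graphs.
# A graph is a frozenset of edges; distance = symmetric-difference size.

def edit_distance(g1, g2):
    # Number of edges that differ between the two graphs.
    return len(g1 ^ g2)

def greedy_global_counterfactuals(inputs, candidates, theta):
    """Pick candidates until every input graph lies within `theta`
    edits of some chosen counterfactual (classic greedy set cover)."""
    uncovered = set(range(len(inputs)))
    chosen = []
    while uncovered:
        best, best_cov = None, set()
        for c in candidates:
            cov = {i for i in uncovered if edit_distance(inputs[i], c) <= theta}
            if len(cov) > len(best_cov):
                best, best_cov = c, cov
        if not best_cov:  # remaining inputs cannot be covered
            break
        chosen.append(best)
        uncovered -= best_cov
    return chosen

# Tiny example: three input graphs, two candidate counterfactuals.
g_a = frozenset({(1, 2), (2, 3)})
g_b = frozenset({(1, 2), (3, 4)})
g_c = frozenset({(5, 6)})
cf1 = frozenset({(1, 2), (2, 3), (3, 4)})  # within 1 edit of g_a and g_b
cf2 = frozenset({(5, 6), (6, 7)})          # within 1 edit of g_c
summary = greedy_global_counterfactuals([g_a, g_b, g_c], [cf1, cf2], theta=2)
print(len(summary))  # -> 2: two counterfactuals summarize all three inputs
```

In the paper's setting the candidates would be graphs that actually flip the GNN's prediction, and the selection objective is more sophisticated; this sketch only shows why a handful of representative counterfactuals can explain a whole dataset at once.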

Why is it important?

Global reasoning reduces the information overload of instance-level explanations and produces high-level explanations that help experts interpret the discovered examples. For instance, in a drug discovery scenario, our method can generate representative drug candidates for HIV, which drug discovery scientists can then evaluate.

Perspectives

The paper is a good starting point for understanding global and counterfactual reasoning in graph machine learning. Beyond its novelty, it can impact real-world applications such as drug discovery, making it beneficial to society as well.

Mert Kosan
University of California Santa Barbara

Read the Original

This page is a summary of: Global Counterfactual Explainer for Graph Neural Networks, February 2023, ACM (Association for Computing Machinery),
DOI: 10.1145/3539597.3570376.
