What is it about?

Graph neural networks are powerful but hard to interpret. Existing explanation tools focus on individual predictions (local explanations), which makes them overwhelming to review and less useful for understanding a model as a whole. We introduce GCFExplainer, a method that produces a small set of global counterfactual explanations that are easy to understand. This approach reveals how the model behaves overall, offers clearer guidance on how predictions could change, and is more efficient and robust than existing alternatives.

Read the Original

This page is a summary of: GCFExplainer: Global Counterfactual Explainer for Graph Neural Networks, ACM Transactions on Intelligent Systems and Technology, August 2025, ACM (Association for Computing Machinery), DOI: 10.1145/3698108.
