What is it about?
Fairness-aware Graph Neural Networks (GNNs) often face a challenging trade-off: prioritizing fairness may require compromising utility. In this work, we re-examine fairness through the lens of spectral graph theory, aiming to reconcile fairness and utility within the framework of spectral graph learning. We explore the correlation between sensitive features and the spectrum in GNNs, using theoretical analysis to characterize the similarity between the original sensitive features and those obtained after convolution under different spectra.
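The idea of comparing sensitive features before and after convolution under a given spectrum can be sketched numerically. The toy example below is illustrative only, not the paper's method: it builds the symmetric normalized Laplacian of a small graph (the adjacency matrix, the low-pass filter choice, and the binary sensitive vector are all assumptions), applies a spectral filter, and measures the cosine similarity between the sensitive-feature vector and its convolved version.

```python
import numpy as np

# Toy undirected graph: 4 nodes, hand-picked adjacency (assumed example data)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
# Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
L = np.eye(4) - D_inv_sqrt @ A @ D_inv_sqrt

# Spectral decomposition L = U diag(lam) U^T (eigenvalues lie in [0, 2])
lam, U = np.linalg.eigh(L)

# A simple low-pass spectral filter g(lam) = 1 - lam/2 (illustrative choice)
g = 1.0 - lam / 2.0
conv = U @ np.diag(g) @ U.T  # graph convolution operator defined spectrally

# Hypothetical binary sensitive-feature vector (e.g. group membership)
s = np.array([1.0, 1.0, 0.0, 0.0])
s_conv = conv @ s

# Cosine similarity between original and convolved sensitive features
cos_sim = s @ s_conv / (np.linalg.norm(s) * np.linalg.norm(s_conv))
print(cos_sim)
```

A similarity near 1 means the convolution largely preserves the sensitive signal; changing the filter `g` changes how much of that signal survives, which is the kind of spectrum-dependence the analysis above studies.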
Why is it important?
Various efforts have been devoted to developing fairness-aware GNNs, which aim to control the degree to which a model depends on sensitive features, measured by independence criteria such as statistical parity and equal opportunity. Proposed controlling techniques include weight perturbation, embedding adjustment, dataset pre-processing, and loss-function regularization. Although these fairness-aware GNNs reduce dependence on sensitive features, fairness often comes with a trade-off in utility, generally measured by prediction accuracy.
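The two independence criteria named above have standard empirical forms. The sketch below (the data arrays are hypothetical) computes the statistical parity difference, the gap in positive-prediction rates between sensitive groups, and the equal opportunity difference, the gap in true positive rates among truly positive examples.

```python
import numpy as np

def statistical_parity_diff(y_hat, s):
    # |P(y_hat = 1 | s = 0) - P(y_hat = 1 | s = 1)|
    return abs(y_hat[s == 0].mean() - y_hat[s == 1].mean())

def equal_opportunity_diff(y_hat, y, s):
    # |P(y_hat = 1 | y = 1, s = 0) - P(y_hat = 1 | y = 1, s = 1)|
    pos0 = y_hat[(y == 1) & (s == 0)]
    pos1 = y_hat[(y == 1) & (s == 1)]
    return abs(pos0.mean() - pos1.mean())

# Hypothetical predictions, labels, and binary sensitive attribute
y_hat = np.array([1, 0, 1, 1, 0, 1])
y     = np.array([1, 0, 1, 0, 1, 1])
s     = np.array([0, 0, 0, 1, 1, 1])

print(statistical_parity_diff(y_hat, s))    # -> 0.0 (equal positive rates)
print(equal_opportunity_diff(y_hat, y, s))  # -> 0.5 (TPR gap among y = 1)
```

Both metrics are 0 for a perfectly group-independent classifier; fairness-aware training drives them toward 0, while utility is tracked separately via accuracy.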
Read the Original
This page is a summary of: FUGNN: Harmonizing Fairness and Utility in Graph Neural Networks, August 2024, ACM (Association for Computing Machinery),
DOI: 10.1145/3637528.3671834.