What is it about?
Graph Neural Networks (GNNs) are showing outstanding results in improving the performance of graph-based applications. This paper introduces Betty, a method that makes GNN training more scalable and accessible via batch-level partitioning. Betty introduces two novel techniques, redundancy-embedded graph (REG) partitioning and memory-aware partitioning, to effectively mitigate redundancy and load-imbalance issues across the partitions.
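To make the core idea concrete, below is a minimal sketch of batch-level partitioning with gradient accumulation in PyTorch. Everything in it is an illustrative assumption rather than Betty's actual implementation: the linear stand-in model, the random data, the partition count, and the naive even split. Betty instead partitions the sampled neighborhood graph with its REG and memory-aware partitioners, which this sketch omits.

    # Sketch: split one mini-batch of target nodes into micro-batches and
    # accumulate gradients, so peak GPU memory scales with a micro-batch
    # rather than the full mini-batch.
    import torch
    import torch.nn as nn

    model = nn.Linear(128, 16)             # stand-in for a real GNN layer stack
    opt = torch.optim.Adam(model.parameters())

    feats = torch.randn(4096, 128)         # features of all sampled nodes (toy data)
    labels = torch.randint(0, 16, (4096,))
    target_nodes = torch.arange(4096)      # the mini-batch of training targets

    num_parts = 4                          # hypothetical; Betty picks partitions
    micro_batches = target_nodes.chunk(num_parts)  # naive even split, not REG

    opt.zero_grad()
    for mb in micro_batches:
        # In Betty, each micro-batch is a partition whose neighborhood subgraph
        # is built to minimize redundant (shared) neighbors across partitions.
        out = model(feats[mb])
        # Scale each micro-batch loss by its share of the mini-batch so the
        # accumulated gradient matches one full mini-batch update.
        loss = nn.functional.cross_entropy(out, labels[mb]) * (len(mb) / len(target_nodes))
        loss.backward()                    # gradients accumulate across micro-batches
    opt.step()                             # a single optimizer update

The design point this illustrates: because gradients are accumulated before the single optimizer step, training each micro-batch separately is mathematically equivalent to training the whole mini-batch at once, while only one micro-batch's activations need to reside in GPU memory at a time.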
Why is it important?
Recent studies demonstrate that GNN performance can be boosted by using more advanced aggregators, deeper aggregation, larger sampling rates, and so on. While these choices lead to promising results, the improvements come at the cost of a significantly increased memory footprint, which easily exceeds GPU memory capacity.
Perspectives
It was a good experience to work with my mentor and co-authors, and I learned a lot. We still collaborate today. The work in this paper serves as an effective foundation for future research, and I hope it can inspire your thinking.
Shuangyan Yang
University of California Merced
Read the Original
This page is a summary of: Betty: Enabling Large-Scale GNN Training with Batch-Level Graph Partitioning, January 2023, ACM (Association for Computing Machinery),
DOI: 10.1145/3575693.3575725.