What is it about?
We propose a solution to the overreliance on labeled data and the poor generalization of graph neural networks (GNNs) when trained in a supervised manner on computer network tasks. We exploit unlabeled, out-of-context data sources by pre-training a GNN in a self-supervised manner using link prediction. We show that a pre-trained GNN needs less labeled data to learn a computer network task in a completely new setting.
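The pre-training idea can be illustrated with a minimal sketch: observed edges in a network graph serve as free positive labels, sampled non-edges as negatives, and the GNN is trained to score real edges higher. The sketch below (plain NumPy, illustrative only; the paper's actual architecture, features, and training setup are not reproduced here, and all function and variable names are hypothetical) shows one GCN-style propagation step, a dot-product link decoder, and the binary cross-entropy objective used for link prediction.

```python
import numpy as np

def gcn_layer(A, X, W):
    # One GCN-style propagation step: symmetric normalization with
    # self-loops, a linear transform, and a ReLU nonlinearity.
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

def link_logits(H, pairs):
    # Dot-product decoder: score of a candidate edge (u, v).
    return np.array([H[u] @ H[v] for u, v in pairs])

def bce_loss(logits, labels):
    # Binary cross-entropy over positive (real) and negative (sampled) edges.
    p = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-9
    return -np.mean(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps))

rng = np.random.default_rng(0)
# Toy 4-node graph standing in for a (much larger) network-traffic graph.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 8))   # node features
W = rng.normal(size=(8, 4))   # learnable weights (hypothetical dimensions)
H = gcn_layer(A, X, W)        # node embeddings

pos = [(0, 1), (1, 2)]        # observed edges = free positive labels
neg = [(0, 3), (1, 3)]        # sampled non-edges = negatives
logits = link_logits(H, pos + neg)
labels = np.array([1, 1, 0, 0], dtype=float)
loss = bce_loss(logits, labels)
```

Minimizing this loss (here omitted) is what drives the self-supervised pre-training; the resulting encoder weights would then be fine-tuned on a downstream, labeled network task.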
Featured Image
Photo by Logan Voss on Unsplash
Why is it important?
We are the first to propose general, self-supervised pre-training of graph neural networks for computer network tasks. Since high-quality labeled data is extremely scarce in the computer networking field, we believe a thorough, large-scale version of what we propose is the only way for AI to make a real impact in the field. Our results show the promise of this direction, though it should be noted that a long path remains before actual operational deployment.
Perspectives
This paper, part of my master's thesis, tackles genuinely relevant problems in the computer networking field. Although the experiments are small and the results merely indicative, the idea and these early results should be taken as strong motivation for further research in this direction!
Louis Van Langendonck
Universitat Politècnica de Catalunya
Read the Original
This page is a summary of: Towards a Graph-based Foundation Model for Network Traffic Analysis, December 2024, ACM (Association for Computing Machinery). DOI: 10.1145/3694811.3697817.