What is it about?
Most graph pre-training models follow the "pre-train, fine-tune" learning strategy. However, there is a training-objective gap between the constructed pretext task and the dedicated downstream tasks, and engineering a pretext objective requires both expert knowledge and tedious manual trials.
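To make that objective gap concrete, here is a minimal PyTorch sketch (the names, shapes, and specific losses are illustrative assumptions, not taken from the paper): the pretext stage optimizes a binary link-prediction loss, while fine-tuning trains a freshly initialized classification head with a cross-entropy loss, so the two stages optimize different objectives over different output spaces.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

embed_dim, num_classes, num_nodes = 16, 4, 8
# Stand-in for node embeddings produced by a graph neural network.
node_emb = torch.randn(num_nodes, embed_dim)

# Pretext objective: binary link prediction on sampled node pairs.
src = torch.randint(0, num_nodes, (20,))
dst = torch.randint(0, num_nodes, (20,))
edge_labels = torch.randint(0, 2, (20,)).float()
edge_scores = (node_emb[src] * node_emb[dst]).sum(dim=-1)
pretext_loss = F.binary_cross_entropy_with_logits(edge_scores, edge_labels)

# Downstream objective: node classification through a new, randomly
# initialized head -- a different loss over a different output space.
head = nn.Linear(embed_dim, num_classes)
node_labels = torch.randint(0, num_classes, (num_nodes,))
downstream_loss = F.cross_entropy(head(node_emb), node_labels)

print(pretext_loss.item(), downstream_loss.item())
```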
Why is it important?
To bridge the training-objective gap, the graph prompting function modifies each standalone node into a token pair, so that downstream node classification looks like the link-prediction pretext task and the pre-trained model can be applied directly without changing the classification layer.
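As a rough illustration of that idea, the following PyTorch sketch (class and variable names are hypothetical, not the paper's code) keeps the pre-trained encoder frozen and introduces one learnable "task token" per class; a node is classified by scoring each (task token, node embedding) pair the way a link-prediction head scores candidate edges, so no new classification layer is introduced.

```python
import torch
import torch.nn as nn

class GraphPrompt(nn.Module):
    """Hypothetical sketch of prompt tuning in the GPPT style.

    The pre-trained GNN stays frozen; each downstream class gets a learnable
    task token. A node is classified by scoring (task token, node embedding)
    pairs, mirroring the link-prediction pretext objective.
    """

    def __init__(self, num_classes: int, embed_dim: int):
        super().__init__()
        # One learnable task token per downstream class.
        self.task_tokens = nn.Parameter(torch.randn(num_classes, embed_dim))

    def forward(self, node_embeddings: torch.Tensor) -> torch.Tensor:
        # node_embeddings: [num_nodes, embed_dim], produced by the frozen
        # pre-trained encoder. Dot products play the role of edge scores;
        # the argmax over classes gives the predicted label per node.
        return node_embeddings @ self.task_tokens.T  # [num_nodes, num_classes]

if __name__ == "__main__":
    # Toy usage with random embeddings standing in for the frozen encoder's output.
    prompt = GraphPrompt(num_classes=4, embed_dim=16)
    fake_node_embeddings = torch.randn(10, 16)
    logits = prompt(fake_node_embeddings)
    print(logits.argmax(dim=-1))  # predicted class per node
```

During prompt tuning, only the task tokens would be optimized while the pre-trained encoder's weights stay fixed, which is what lets the same model serve the downstream task without retraining.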
Read the Original
This page is a summary of: GPPT: Graph Pre-training and Prompt Tuning to Generalize Graph Neural Networks, August 2022, ACM (Association for Computing Machinery),
DOI: 10.1145/3534678.3539249.