What is it about?

In this paper, we propose ProtoMGAE, a graph representation learning method based on prototype-aware contrastive learning and masked graph auto-encoding. Our model leverages three complementary objectives, i.e., masked feature reconstruction, clustering consistency, and representation contrasting, to capture graph information and learn node representations from macro, meso, and micro perspectives.
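
To make the three objectives concrete, here is a minimal PyTorch-style sketch of how such a combined loss could be assembled. Everything in it (the function name, the shared prototype matrix, the loss weights lambda_c and lambda_r, and the use of mean-squared error for reconstruction) is an illustrative assumption, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def protomgae_style_loss(z_online, z_target, x_recon, x_orig, mask,
                         prototypes, lambda_c=1.0, lambda_r=1.0):
    """Illustrative combination of three objectives (not the authors'
    code): masked feature reconstruction, prototype/clustering
    consistency, and node-level representation contrasting."""
    # 1) Masked feature reconstruction: only masked nodes contribute.
    recon = F.mse_loss(x_recon[mask], x_orig[mask])

    # 2) Clustering consistency: soft assignments to shared prototypes
    #    from the two views should agree (cross-entropy between them).
    p_online = F.softmax(z_online @ prototypes.t(), dim=-1)
    p_target = F.softmax(z_target @ prototypes.t(), dim=-1).detach()
    consistency = -(p_target * torch.log(p_online + 1e-8)).sum(-1).mean()

    # 3) Representation contrasting: align each online embedding with
    #    its target counterpart (cosine similarity, BYOL-style).
    z_o = F.normalize(z_online, dim=-1)
    z_t = F.normalize(z_target, dim=-1).detach()
    contrast = (2 - 2 * (z_o * z_t).sum(-1)).mean()

    return recon + lambda_c * consistency + lambda_r * contrast
```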

Why is it important?

Enhanced representations. We employ a masked graph modeling strategy to accommodate incomplete graphs with missing node features. Moreover, the contrastive objective between the online and target networks pulls positive pairs into close alignment while keeping the representations uniformly distributed on the unit hypersphere (a sketch of this behavior follows below). Together, these strategies yield more robust and discriminative node representations.

Performance improvement. Extensive experiments on several datasets demonstrate that the proposed method achieves significantly better or competitive performance on downstream tasks, especially graph clustering, compared with state-of-the-art methods.
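
As a rough illustration of the alignment and uniformity properties mentioned above, the sketch below computes the standard alignment and uniformity losses on the unit hypersphere (in the spirit of Wang and Isola's formulation), together with a momentum (EMA) update of the target network. The function names and the momentum value are assumptions, not the paper's code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(online_net, target_net, momentum=0.99):
    # The target network tracks an exponential moving average of the
    # online network's weights; no gradients flow into the target.
    for p_o, p_t in zip(online_net.parameters(), target_net.parameters()):
        p_t.data.mul_(momentum).add_(p_o.data, alpha=1 - momentum)

def alignment_uniformity(z_online, z_target, t=2.0):
    # Project both views onto the unit hypersphere.
    z_o = F.normalize(z_online, dim=-1)
    z_t = F.normalize(z_target, dim=-1)
    # Alignment: positive pairs (same node, two views) stay close.
    align = (z_o - z_t).pow(2).sum(-1).mean()
    # Uniformity: embeddings spread over the hypersphere
    # (log of the mean Gaussian potential over all pairs).
    uniform = torch.pdist(z_o).pow(2).mul(-t).exp().mean().log()
    return align, uniform
```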

Perspectives

This work is a preliminary attempt to combine the advantages of graph contrastive learning and masked graph auto-encoders in a self-supervised manner, and it demonstrates excellent performance. Future work building on this idea still holds promising avenues for exploration.

Yimei Zheng
Beijing Jiaotong University

Read the Original

This page is a summary of: ProtoMGAE: Prototype-Aware Masked Graph Auto-Encoder for Graph Representation Learning, ACM Transactions on Knowledge Discovery from Data, April 2024, ACM (Association for Computing Machinery). DOI: 10.1145/3649143.
