What is it about?

Graph collaborative filtering (GCF) has achieved exciting recommendation performance with its ability to aggregate high-order graph structure information. Recently, contrastive learning (CL) has been incorporated into GCF to alleviate data sparsity and noise issues. However, most existing methods employ random or manual augmentation to produce contrastive views, which may destroy the original topology and amplify noise effects. We argue that such augmentation is insufficient to produce the optimal contrastive view, leading to suboptimal recommendation results. In this paper, we propose a Learnable Model Augmentation Contrastive Learning (LMACL) framework for recommendation, which effectively combines graph-level and node-level collaborative relations to enhance the expressiveness of the collaborative filtering (CF) paradigm. Specifically, we first use a graph convolutional network (GCN) as the backbone encoder to incorporate multi-hop neighbors into graph-level original node representations, leveraging the high-order connectivity in user-item interaction graphs. At the same time, we treat a multi-head graph attention network (GAT) as an augmentation-view generator that adaptively produces high-quality node-level augmented views. Finally, joint learning enables end-to-end training, in which the mutual supervision and collaborative cooperation of the GCN and GAT achieve learnable model augmentation.
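The pipeline above, multi-hop GCN propagation to build graph-level representations plus a contrastive objective that aligns them with a second (augmented) view, can be sketched in a few lines. The following is a minimal NumPy illustration under assumed LightGCN-style (parameter-free, layer-averaged) propagation and a standard InfoNCE loss; it is not the paper's implementation, and the function names are ours.

```python
import numpy as np

def gcn_propagate(adj, x, n_layers=3):
    """LightGCN-style propagation (assumed): average of multi-hop embeddings.

    adj: (n, n) symmetric adjacency of the interaction graph.
    x:   (n, d) initial node embeddings.
    """
    # Symmetric degree normalization: D^{-1/2} A D^{-1/2}
    d = adj.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    a_norm = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    layers = [x]
    for _ in range(n_layers):
        layers.append(a_norm @ layers[-1])  # aggregate one more hop
    return np.mean(layers, axis=0)          # combine multi-hop representations

def info_nce(z1, z2, tau=0.2):
    """InfoNCE contrastive loss between two views of the same nodes.

    z1, z2: (n, d) embeddings; row i of each view is a positive pair,
    all other rows serve as negatives.
    """
    # L2-normalize so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / tau
    # Positive pairs sit on the diagonal; denominator sums over all pairs per row
    log_prob = sim.diagonal() - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()
```

In LMACL the second view would come from the learnable multi-head GAT generator and both encoders would be trained jointly with the recommendation loss; here, for a self-contained sketch, a perturbed copy of the GCN output can stand in for the learned augmented view.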

Why is it important?

It effectively combines two commonly used graph networks, GCN and GAT, and achieves better recommendation performance.

Perspectives

In the future, we plan to extend the idea of model augmentation to hypergraphs and knowledge graphs for recommendation.

Xinru Liu

Read the Original

This page is a summary of: LMACL: Improving Graph Collaborative Filtering with Learnable Model Augmentation Contrastive Learning, ACM Transactions on Knowledge Discovery from Data, April 2024, ACM (Association for Computing Machinery). DOI: 10.1145/3657302.
You can read the full text via the DOI above.
