What is it about?

We introduce a novel approach that uses language models as a foundation for knowledge graph representation learning and that generalizes across steps of the recommendation pipeline. Our approach trains a decoder-only model in two phases: first on generic paths, so the model learns the overall structure of the knowledge graph, and then on paths specific to the downstream step (knowledge completion and path reasoning in our case). This dual-phase strategy shows improvements over 22 baselines in both knowledge completion and recommendation.
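As a minimal sketch of this dual-phase idea (not the authors' implementation), the snippet below pre-trains a small decoder-only language model on tokenized knowledge graph paths and then fine-tunes it on task-specific paths. The vocabulary size, path length, model size, and the random tensors standing in for real path data are hypothetical placeholders.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import GPT2Config, GPT2LMHeadModel

VOCAB_SIZE = 10_000      # hypothetical: |entities| + |relations| + special tokens
MAX_PATH_LEN = 16        # hypothetical maximum path length in tokens

# Small decoder-only (GPT-style) model over a knowledge-graph-path vocabulary.
config = GPT2Config(vocab_size=VOCAB_SIZE, n_positions=MAX_PATH_LEN,
                    n_embd=256, n_layer=4, n_head=4)
model = GPT2LMHeadModel(config)

def train(model, token_paths, epochs, lr):
    """Next-token (causal LM) training on paths given as a LongTensor
    of shape [num_paths, MAX_PATH_LEN]."""
    loader = DataLoader(TensorDataset(token_paths), batch_size=64, shuffle=True)
    optim = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for (batch,) in loader:
            out = model(input_ids=batch, labels=batch)  # loss = shifted cross-entropy
            out.loss.backward()
            optim.step()
            optim.zero_grad()

# Phase 1: pre-train on generic paths sampled from the knowledge graph, so the
# model learns its overall structure (random data here as a stand-in).
generic_paths = torch.randint(0, VOCAB_SIZE, (1024, MAX_PATH_LEN))
train(model, generic_paths, epochs=3, lr=1e-4)

# Phase 2: fine-tune on paths specific to the downstream step, e.g. user-item
# reasoning paths for recommendation or triple paths for knowledge completion.
task_paths = torch.randint(0, VOCAB_SIZE, (512, MAX_PATH_LEN))
train(model, task_paths, epochs=2, lr=5e-5)
```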

Why is it important?

In our work, we focus on improving recommendation systems that rely on knowledge graphs by addressing a common challenge: the mismatch between the representation methods used for key tasks such as knowledge completion (link prediction) and path reasoning (recommendation). To address it, we present a new decoder-only Transformer model designed for generalizable knowledge graph representation learning, trained first on generic paths from the knowledge graph and then fine-tuned for specific downstream tasks.

Perspectives

I hope that this work will open up new research on models that are capable of generalising across different tasks.

Alessandro Soccol
Università degli Studi di Cagliari

Read the Original

This page is a summary of: KGGLM: A Generative Language Model for Generalizable Knowledge Graph Representation Learning in Recommendation, October 2024, ACM (Association for Computing Machinery),
DOI: 10.1145/3640457.3691703.
You can read the full text via the DOI above.
