What is it about?

A new meta-learning based dynamic adaptive relation learning model (DARL) is proposed for few-shot knowledge graph completion (KGC). To obtain better semantic information for the meta knowledge, the proposed DARL model applies a dynamic neighbor encoder to incorporate neighbor relations into the entity embeddings. In addition, DARL builds an attention-mechanism-based fusion strategy for different attributes of the same relation to further enhance the relation-meta learning ability.
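
To make the idea concrete, here is a minimal PyTorch-style sketch of a dynamic neighbor encoder of this kind. It is not the authors' implementation: the class name, tensor shapes, and the residual combination at the end are illustrative assumptions; the sketch only shows the general pattern of weighting neighbors by their relevance to the task relation and folding neighbor relations into the entity embedding.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicNeighborEncoder(nn.Module):
    # Hypothetical sketch (not the paper's code): weights an entity's neighbors
    # by their relevance to the current task relation and folds the neighbor
    # relations into the entity embedding.
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(2 * dim, dim)  # fuse (neighbor relation, neighbor entity)

    def forward(self, ent_emb, nbr_rel_emb, nbr_ent_emb, task_rel_emb):
        # ent_emb:      (B, d)    target entity embeddings
        # nbr_rel_emb:  (B, N, d) embeddings of the relations to the neighbors
        # nbr_ent_emb:  (B, N, d) neighbor entity embeddings
        # task_rel_emb: (B, d)    embedding of the task (query) relation
        nbr = self.proj(torch.cat([nbr_rel_emb, nbr_ent_emb], dim=-1))  # (B, N, d)
        scores = torch.einsum('bnd,bd->bn', nbr, task_rel_emb)          # relevance to task relation
        alpha = F.softmax(scores, dim=-1)                               # dynamic neighbor weights
        context = torch.einsum('bn,bnd->bd', alpha, nbr)                # weighted neighbor summary
        return ent_emb + context  # entity embedding enriched with neighbor relations

Because the weights alpha depend on the task relation, the same entity is encoded differently for different few-shot tasks, which is the "dynamic" aspect the summary refers to.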

Why is it important?

As artificial intelligence gradually steps into the cognitive intelligence stage, knowledge graphs (KGs) play an increasingly important role in many natural language processing tasks. Due to the prevalence of long-tail relations in KGs, few-shot knowledge graph completion (KGC) for link prediction of long-tail relations has gradually become a hot research topic. Current few-shot KGC methods mainly focus on static representations of surrounding entities to explore entities' potential semantic features, while ignoring the dynamic properties among entities and the special influence of long-tail relations on link prediction. This paper proposes a new meta-learning based dynamic adaptive relation learning model (DARL) for few-shot KGC.

Perspectives

(1) For the entity embedding, we design a dynamic neighbor encoder that dynamically assigns different weights to neighbor entities according to the task relations and incorporates the neighbor relations into the entity embedding, which gives the meta knowledge better semantic information. (2) For different attributes of the same relation, we propose a fusion strategy based on an attention mechanism, which improves the relation meta through a weighted summation of the different attributes of the same relation.
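
The second point can be sketched in the same spirit. The snippet below is an assumed, simplified illustration of attention-based fusion rather than the paper's exact design: several attribute-specific representations of one relation are scored by a small attention module and combined by weighted summation into a single relation meta. The class name and shapes are hypothetical.

import torch
import torch.nn as nn

class AttributeFusion(nn.Module):
    # Hypothetical sketch (not the paper's code): fuses attribute-specific
    # representations of the same relation into one relation meta by
    # attention-weighted summation.
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # one attention score per attribute

    def forward(self, attr_rel_metas):
        # attr_rel_metas: (K, d) -- K candidate relation metas, one per attribute
        w = torch.softmax(self.score(attr_rel_metas), dim=0)  # (K, 1) attention weights
        return (w * attr_rel_metas).sum(dim=0)                # (d,) fused relation meta

For example, fusing three 100-dimensional attribute representations:

fusion = AttributeFusion(dim=100)
rel_meta = fusion(torch.randn(3, 100))  # -> tensor of shape (100,)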

Prof. Linqin CAI

Read the Original

This page is a summary of: Meta-Learning Based Dynamic Adaptive Relation Learning for Few-Shot Knowledge Graph Completion, Big Data Research, August 2023, Elsevier,
DOI: 10.1016/j.bdr.2023.100394.
