All Stories

  1. Abstractive Visual Understanding of Multi-modal Structured Knowledge: A New Perspective for MLLM Evaluation
  2. Client-Server Co-design with Multi-modal Codebooks Makes Better and Faster Federated Knowledge Sharing
  3. Making Large Language Models Perform Better in Knowledge Graph Completion
  4. CREST: Cross-modal Resonance through Evidential Deep Learning for Enhanced Zero-Shot Learning
  5. NativE: Multi-modal Knowledge Graph Completion in the Wild
  6. MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid
  7. Knowledge-driven Visual Question Answering via Late Knowledge-to-Text Injection
  8. Disentangled Ontology Embedding for Zero-shot Learning
  9. OntoZSL: Ontology-enhanced Zero-shot Learning