JOURNAL ARTICLE

Efficient Knowledge Graph Embedding Training Framework with Multiple GPUs

Ding Sun, Zhen Huang, Dongsheng Li, Min Guo

Year: 2022 | Journal: Tsinghua Science & Technology | Vol: 28 (1) | Pages: 167-175 | Publisher: Tsinghua University Press

Abstract

When training a large-scale knowledge graph embedding (KGE) model with multiple graphics processing units (GPUs), partition-based methods are necessary for parallel training. However, existing partition-based training methods suffer from low GPU utilization and high input/output (I/O) overhead between memory and disk. To address the high I/O overhead between disk and memory, we optimized twice partitioning with fine-grained GPU scheduling, reducing the I/O overhead between CPU memory and disk. To address the low GPU utilization caused by GPU load imbalance, we proposed balanced partitioning and dynamic scheduling methods that accelerate training in different cases. With the above methods, we proposed fine-grained partitioning KGE, an efficient KGE training framework with multiple GPUs. We conducted experiments on knowledge graph benchmarks, and the results show that our method achieves speedup over existing frameworks for KGE training.
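To make the partition-based setup concrete, here is a minimal sketch (not the authors' code; function names and the round-robin entity-assignment scheme are illustrative assumptions) of the common approach: entities are split into parts, each triple falls into a "bucket" keyed by the parts of its head and tail, and two buckets can be trained on different GPUs concurrently only when their entity parts do not overlap.

```python
# Illustrative sketch of partition-based multi-GPU KGE training.
# Assumptions (not from the paper): round-robin entity partitioning
# and the helper names below are hypothetical.

def partition_triples(triples, num_entities, num_parts):
    """Bucket triples (h, r, t) by (part of head, part of tail)."""
    # Toy partitioning scheme: assign entity e to part e % num_parts.
    part_of = [e % num_parts for e in range(num_entities)]
    buckets = {}
    for h, r, t in triples:
        key = (part_of[h], part_of[t])
        buckets.setdefault(key, []).append((h, r, t))
    return buckets

def conflict_free(bucket_a, bucket_b):
    """Two buckets may run on different GPUs concurrently only if
    they touch disjoint entity parts (no shared embedding shard)."""
    return not (set(bucket_a) & set(bucket_b))

# Example: 4 entities, 2 parts, a small cycle of triples.
triples = [(0, 0, 1), (1, 0, 2), (2, 0, 3), (3, 0, 0)]
buckets = partition_triples(triples, num_entities=4, num_parts=2)
```

A scheduler (the balanced-partitioning and dynamic-scheduling methods in the abstract) would then pick sets of pairwise conflict-free buckets to keep all GPUs busy while minimizing how often embedding partitions are swapped between disk, CPU memory, and GPU memory.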

Keywords:
Computer science, Parallel computing, Speedup, Embedding, Partition (number theory), Graph partition, CUDA, Graphics, Scheduling (production processes), Graph, Overhead (engineering), Theoretical computer science, Operating system, Artificial intelligence

Metrics

Cited By: 5
FWCI (Field Weighted Citation Impact): 0.98
Refs: 35
Citation Normalized Percentile: 0.74

Topics

Advanced Graph Neural Networks
Physical Sciences →  Computer Science →  Artificial Intelligence
Recommender Systems and Techniques
Physical Sciences →  Computer Science →  Information Systems
Brain Tumor Detection and Classification
Life Sciences →  Neuroscience →  Neurology