Blog
GNN training acceleration with BFloat16 data type on CPU
By Ilia Taraban, in blog
Graph neural networks (GNNs) have achieved state-of-the-art performance on various industrial tasks. However, most GNN operations are memory-bound and require a significant amount of RAM. To tackle this problem, we apply a well-known technique, reducing tensor size with a smaller data type, to optimize the memory efficiency of GNN training on Intel® Xeon® Scalable processors with BFloat16. The approach achieves notable gains on various GNN models across a wide range of datasets, speeding up training by up to 5×.
Read more · 10 August
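The core idea above, shrinking memory-bound tensors by storing them in BFloat16 instead of float32, can be sketched in plain PyTorch. This is an illustrative example, not code from the post: the tensor shapes and the `Linear` layer are arbitrary stand-ins for GNN feature tensors and message-passing layers.

```python
import torch

# Feature tensor in the default float32 (4 bytes per element).
feat32 = torch.randn(1000, 128, dtype=torch.float32)

# Casting to bfloat16 (2 bytes per element) halves the memory footprint
# while keeping float32's 8-bit exponent range.
feat16 = feat32.to(torch.bfloat16)

bytes32 = feat32.element_size() * feat32.nelement()
bytes16 = feat16.element_size() * feat16.nelement()
print(bytes32, bytes16)  # bfloat16 tensor uses half the bytes

# PyTorch also supports bfloat16 autocast on CPU, so compute-heavy ops
# (e.g. the linear transforms inside GNN layers) run in reduced precision
# without manually casting every tensor.
layer = torch.nn.Linear(128, 64)
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = layer(feat32)
print(out.dtype)
```

In a real GNN training loop the same pattern applies: node/edge feature tensors are stored in BFloat16, and the forward/backward passes run under CPU autocast, which is where the memory-bandwidth savings translate into speedup.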
DGL 2.1: GPU Acceleration for Your GNN Data Pipeline
By Muhammed Fatih Balin, in release
DGL 2.1 introduces GPU acceleration for the whole GNN data loading pipeline in GraphBolt, including the graph sampling and feature fetching stages.
Read more · 6 March
DGL 2.0: Streamlining Your GNN Data Pipeline from Bottleneck to Boost
The arrival of DGL 2.0 marks a significant milestone in the field of GNNs, offering substantial improvements in data loading capabilities.
Read more · 26 January
DGL 1.0: Empowering Graph Machine Learning for Everyone
We are thrilled to announce the arrival of DGL 1.0, a significant milestone capping 3+ years of development.
Read more · 20 February