Blog


GNN training acceleration with BFloat16 data type on CPU

Graph neural networks (GNNs) have achieved state-of-the-art performance on various industrial tasks. However, most GNN operations are memory-bound and require a significant amount of RAM. To tackle this problem, a well-known technique, reducing tensor size by using a smaller data type, is applied to optimize the memory efficiency of GNN training on Intel® Xeon® Scalable processors with BFloat16. The proposed approach achieves strong results across various GNN models and a wide range of datasets, speeding up training by up to 5×.
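The core idea can be illustrated without any framework code: BFloat16 keeps float32's sign bit and full 8-bit exponent but truncates the mantissa from 23 bits to 7, so each element occupies half the memory at reduced precision. A minimal sketch of that conversion (for illustration only, not DGL's actual implementation, which relies on hardware-accelerated BFloat16 kernels):

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Pack a float as IEEE-754 float32 and keep only the top 16 bits
    (sign + 8 exponent bits + 7 mantissa bits), i.e. truncate to bfloat16."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return bits >> 16

def bfloat16_bits_to_float32(b: int) -> float:
    """Expand bfloat16 bits back to float32 by zero-filling the low mantissa bits."""
    return struct.unpack(">f", struct.pack(">I", b << 16))[0]

x = 3.14159
b = float32_to_bfloat16_bits(x)      # fits in 16 bits: half the storage of float32
approx = bfloat16_bits_to_float32(b) # ≈ 3.140625 (only ~2-3 decimal digits survive)
```

Because the exponent range matches float32, values rarely overflow or underflow when switching to BFloat16; the cost is mantissa precision, which GNN training typically tolerates well.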

DGL 2.1: GPU Acceleration for Your GNN Data Pipeline

DGL 2.1 introduces GPU acceleration for the whole GNN data loading pipeline in GraphBolt, including the graph sampling and feature fetching stages.

DGL 2.0: Streamlining Your GNN Data Pipeline from Bottleneck to Boost

By DGLTeam, in release

The arrival of DGL 2.0 marks a significant milestone in the field of GNNs, offering substantial improvements in data loading capabilities.

DGL 1.0: Empowering Graph Machine Learning for Everyone

By DGLTeam, in release

We are thrilled to announce the arrival of DGL 1.0, a significant milestone capping 3+ years of development.
