CuGraphRelGraphConv
- class dgl.nn.pytorch.conv.CuGraphRelGraphConv(in_feat, out_feat, num_rels, regularizer=None, num_bases=None, bias=True, self_loop=True, dropout=0.0, apply_norm=False)[source]
Bases: CuGraphBaseConv

An accelerated relational graph convolution layer from Modeling Relational Data with Graph Convolutional Networks that leverages the highly-optimized aggregation primitives in cugraph-ops.

See dgl.nn.pytorch.conv.RelGraphConv for the mathematical model.

This module depends on the pylibcugraphops package, which can be installed via conda install -c nvidia pylibcugraphops=23.04. pylibcugraphops 23.04 requires Python 3.8.x or 3.10.x.

Note
This is an experimental feature.
- Parameters:
in_feat (int) – Input feature size.
out_feat (int) – Output feature size.
num_rels (int) – Number of relations.
regularizer (str, optional) – Which weight regularizer to use ("basis" or None): "basis" is for basis-decomposition; None applies no regularization. Default: None.
num_bases (int, optional) – Number of bases. It comes into effect when a regularizer is applied. Default: None.
bias (bool, optional) – True if bias is added. Default: True.
self_loop (bool, optional) – True to include self-loop messages. Default: True.
dropout (float, optional) – Dropout rate. Default: 0.0.
apply_norm (bool, optional) – True to normalize aggregation output by the in-degree of the destination node per edge type, i.e. \(|\mathcal{N}^r_i|\). Default: False.
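To make the basis-decomposition regularizer concrete: with regularizer='basis', each relation's weight matrix is a learned mixture of num_bases shared basis matrices, \(W_r = \sum_b a_{rb} V_b\), which keeps the parameter count from growing linearly in num_rels. Below is a minimal pure-Python sketch of that mixing step; the function name relation_weights and its list-of-lists representation are illustrative assumptions, not DGL's implementation.

```python
def relation_weights(bases, coeffs):
    """Illustrative sketch (not DGL code) of basis decomposition.

    bases: list of num_bases matrices (nested lists), the shared V_b.
    coeffs: coeffs[r][b] is the mixing coefficient a_{rb} for
        relation r and basis b.
    Returns one weight matrix W_r per relation: W_r = sum_b a_{rb} V_b.
    """
    num_rows = len(bases[0])
    num_cols = len(bases[0][0])
    weights = []
    for rel_coeffs in coeffs:
        W = [[0.0] * num_cols for _ in range(num_rows)]
        for a_rb, basis in zip(rel_coeffs, bases):
            for i in range(num_rows):
                for j in range(num_cols):
                    W[i][j] += a_rb * basis[i][j]
        weights.append(W)
    return weights
```

With num_bases smaller than num_rels, all relations share the same bases and only the per-relation coefficient vectors differ, which is why num_bases only takes effect when a regularizer is set.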
Examples
>>> import dgl
>>> import torch
>>> from dgl.nn import CuGraphRelGraphConv
...
>>> device = 'cuda'
>>> g = dgl.graph(([0, 1, 2, 3, 2, 5], [1, 2, 3, 4, 0, 3])).to(device)
>>> feat = torch.ones(6, 10).to(device)
>>> conv = CuGraphRelGraphConv(
...     10, 2, 3, regularizer='basis', num_bases=2).to(device)
>>> etype = torch.tensor([0, 1, 2, 0, 1, 2]).to(device)
>>> res = conv(g, feat, etype)
>>> res
tensor([[-1.7774, -2.0184],
        [-1.4335, -2.3758],
        [-1.7774, -2.0184],
        [-0.4698, -3.0876],
        [-1.4335, -2.3758],
        [-1.4331, -2.3295]], device='cuda:0', grad_fn=<AddBackward0>)
- forward(g, feat, etypes, max_in_degree=None)[source]
Forward computation.
- Parameters:
g (DGLGraph) – The graph.
feat (torch.Tensor) – A 2D tensor of node features. Shape: \((|V|, D_{in})\).
etypes (torch.Tensor) – A 1D integer tensor of edge types. Shape: \((|E|,)\). Note that cugraph-ops only accepts edge type tensors in int32, so any input of another integer type will be cast to int32, introducing some overhead. Pass in int32 tensors directly for best performance.
max_in_degree (int, optional) – Maximum in-degree of destination nodes. It is only effective when g is a DGLBlock, i.e., a bipartite graph. When g is generated from a neighbor sampler, the value should be set to the corresponding fanout. If not given, max_in_degree will be calculated on the fly.
- Returns:
New node features. Shape: \((|V|, D_{out})\).
- Return type:
torch.Tensor
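The apply_norm option above normalizes each aggregated message by the destination node's per-relation in-degree \(|\mathcal{N}^r_i|\). The following pure-Python sketch shows that normalization on an edge list; it is an illustration of the formula, not DGL's or cugraph-ops' kernel, and the function name aggregate is a hypothetical choice.

```python
from collections import defaultdict


def aggregate(edges, messages, num_nodes, apply_norm=True):
    """Sketch of per-edge-type aggregation (not DGL's implementation).

    edges: list of (src, dst, etype) triples.
    messages: messages[k] is the feature vector carried by edge k.
    When apply_norm is True, each message is divided by |N^r_i|, the
    in-degree of its destination node under that edge's relation.
    """
    # First pass: count in-degree per (destination, relation) pair.
    in_degree = defaultdict(int)
    for src, dst, etype in edges:
        in_degree[(dst, etype)] += 1

    # Second pass: accumulate (optionally normalized) messages.
    dim = len(messages[0])
    out = [[0.0] * dim for _ in range(num_nodes)]
    for (src, dst, etype), msg in zip(edges, messages):
        norm = in_degree[(dst, etype)] if apply_norm else 1
        for j in range(dim):
            out[dst][j] += msg[j] / norm
    return out
```

With apply_norm=True this computes a per-relation mean; with apply_norm=False it reduces to a plain sum, matching the layer's default of False.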