DGNConv
- class dgl.nn.pytorch.conv.DGNConv(in_size, out_size, aggregators, scalers, delta, dropout=0.0, num_towers=1, edge_feat_size=0, residual=True)[source]
Bases: PNAConv
Directional Graph Network Layer from Directional Graph Networks
DGN introduces two special directional aggregators according to the vector field $F$, which is defined as the gradient of the low-frequency eigenvectors of the graph Laplacian.

The directional average aggregator is defined as

$$h_i' = \sum_{j \in \mathcal{N}(i)} \frac{|F_{i,j}|}{\sum_{j' \in \mathcal{N}(i)} |F_{i,j'}| + \epsilon} \, h_j$$

The directional derivative aggregator is defined as

$$h_i' = \sum_{j \in \mathcal{N}(i)} \frac{F_{i,j}}{\sum_{j' \in \mathcal{N}(i)} |F_{i,j'}| + \epsilon} \, h_j \;-\; h_i \sum_{j \in \mathcal{N}(i)} \frac{F_{i,j}}{\sum_{j' \in \mathcal{N}(i)} |F_{i,j'}| + \epsilon}$$

$\epsilon$ is an infinitesimal added to keep the computation numerically stable.
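For intuition, here is a minimal dense sketch of the two aggregators above on a toy graph. This is a hypothetical illustration, not DGL's sparse implementation; `A`, `phi`, and `directional_aggregators` are made-up names, and the vector field is taken as $F_{i,j} = \phi_j - \phi_i$ on edges.

>>> import torch as th
>>>
>>> # Toy dense sketch (NOT DGL's implementation) of the two DGN aggregators.
>>> def directional_aggregators(A, h, phi, eps=1e-8):
...     # F_ij = phi_j - phi_i on edges, 0 elsewhere (gradient of eigenvector phi)
...     F = A * (phi[None, :] - phi[:, None])
...     # L1-normalize each row, with eps for numerical stability
...     Fhat = F / (F.abs().sum(dim=1, keepdim=True) + eps)
...     h_av = Fhat.abs() @ h                                # directional average
...     h_dx = Fhat @ h - Fhat.sum(dim=1, keepdim=True) * h  # directional derivative
...     return h_av, h_dx
>>>
>>> A = th.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])  # 3-node path graph
>>> h = th.randn(3, 4)                                         # node features
>>> phi = th.tensor([-1., 0., 1.])   # stand-in for a low-frequency eigenvector
>>> h_av, h_dx = directional_aggregators(A, h, phi)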
- Parameters:
  - in_size (int) – Input feature size; i.e., the size of $h_i$.
  - out_size (int) – Output feature size; i.e., the size of $h_i'$.
  - aggregators (list of str) – List of aggregation function names (each aggregator specifies a way to aggregate messages from neighbours), selected from:
    - mean: the mean of neighbour messages
    - max: the maximum of neighbour messages
    - min: the minimum of neighbour messages
    - std: the standard deviation of neighbour messages
    - var: the variance of neighbour messages
    - sum: the sum of neighbour messages
    - moment3, moment4, moment5: the normalized moments aggregation $(\mathbb{E}[(X - \mathbb{E}[X])^n])^{1/n}$ for $n = 3, 4, 5$
    - dir{k}-av: directional average aggregation with directions defined by the k-th smallest eigenvectors; k can be selected from 1, 2, 3
    - dir{k}-dx: directional derivative aggregation with directions defined by the k-th smallest eigenvectors; k can be selected from 1, 2, 3

    Note that using directional aggregation requires the LaplacianPE transform on the input graph for eigenvector computation (the PE size must be >= k above).
  - scalers (list of str) – List of scaler function names, selected from:
    - identity: no scaling
    - amplification: multiply the aggregated message by $\log(d+1)/\delta$, where $d$ is the in-degree of the node
    - attenuation: multiply the aggregated message by $\delta/\log(d+1)$
  - delta (float) – The in-degree-related normalization factor computed over the training set, used by scalers for normalization: $\delta = \mathrm{mean}_{v \in \text{train}}\left(\log(d(v)+1)\right)$, where $d(v)$ is the in-degree of node $v$ in the training set. See the sketch after this parameter list.
  - dropout (float, optional) – The dropout ratio. Default: 0.0.
  - num_towers (int, optional) – The number of towers used. Default: 1. Note that in_size and out_size must both be divisible by num_towers.
  - edge_feat_size (int, optional) – The edge feature size. Default: 0.
  - residual (bool, optional) – Whether to add a residual connection to the output. Default: True. If in_size and out_size of the DGN conv layer differ, this flag is forcibly set to False.
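As referenced in the delta entry above, a minimal sketch of computing delta from the definition given there, assuming a hypothetical list train_graphs of training DGLGraph objects:

>>> import torch as th
>>>
>>> # mean of log(in_degree + 1) over all nodes of the (hypothetical) training graphs
>>> log_degs = th.cat([th.log(g.in_degrees().float() + 1) for g in train_graphs])
>>> delta = log_degs.mean().item()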
Example
>>> import dgl
>>> import torch as th
>>> from dgl.nn import DGNConv
>>> from dgl import LaplacianPE
>>>
>>> # DGN requires precomputed eigenvectors, with 'eig' as feature name.
>>> transform = LaplacianPE(k=3, feat_name='eig')
>>> g = dgl.graph(([0,1,2,3,2,5], [1,2,3,4,0,3]))
>>> g = transform(g)
>>> eig = g.ndata['eig']
>>> feat = th.ones(6, 10)
>>> conv = DGNConv(10, 10, ['dir1-av', 'dir1-dx', 'sum'], ['identity', 'amplification'], 2.5)
>>> ret = conv(g, feat, eig_vec=eig)
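Continuing the example, a hypothetical configuration that also consumes edge features and splits the computation into two towers (in_size and out_size are both divisible by num_towers):

>>> efeat = th.ones(6, 4)
>>> conv = DGNConv(10, 10, ['dir1-av', 'sum'], ['identity'], 2.5,
...                num_towers=2, edge_feat_size=4)
>>> ret = conv(g, feat, edge_feat=efeat, eig_vec=eig)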
- forward(graph, node_feat, edge_feat=None, eig_vec=None)[source]
Description
Compute DGN layer.
- param graph:
The graph.
- type graph:
DGLGraph
- param node_feat:
The input feature of shape $(N, h_n)$, where $N$ is the number of nodes and $h_n$ must be the same as in_size.
- type node_feat:
torch.Tensor
- param edge_feat:
The edge feature of shape $(M, h_e)$, where $M$ is the number of edges and $h_e$ must be the same as edge_feat_size.
- type edge_feat:
torch.Tensor, optional
- param eig_vec:
$K$ smallest non-trivial eigenvectors of the graph Laplacian, of shape $(N, K)$. It is only required when aggregators contains directional aggregators.
- type eig_vec:
torch.Tensor, optional
- returns:
The output node feature of shape $(N, h_n')$, where $h_n'$ should be the same as out_size.
- rtype:
torch.Tensor
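As a minimal illustration of the eig_vec requirement, continuing the example above: when no dir{k} aggregator is used, eig_vec can be omitted, and the output has shape $(N, \text{out\_size})$:

>>> conv = DGNConv(10, 16, ['mean', 'max'], ['identity'], 2.5)
>>> out = conv(g, feat)  # eig_vec not needed without directional aggregators
>>> out.shape
torch.Size([6, 16])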