TAGConv
- class dgl.nn.pytorch.conv.TAGConv(in_feats, out_feats, k=2, bias=True, activation=None)[source]
Bases:
Module
Topology Adaptive Graph Convolutional layer from Topology Adaptive Graph Convolutional Networks

$$H^{K} = \sum_{k=0}^{K} \left(D^{-1/2} A D^{-1/2}\right)^{k} X \Theta_{k},$$

where $A$ denotes the adjacency matrix, $D_{ii} = \sum_{j} A_{ij}$ its diagonal degree matrix, and $\Theta_{k}$ denotes the linear weights to sum the results of different hops together.
- Parameters:
in_feats (int) – Input feature size; i.e., the number of dimensions of $X$.
out_feats (int) – Output feature size; i.e., the number of dimensions of $H^{K}$.
k (int, optional) – Number of hops $K$. Default: 2.
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True.
activation (callable activation function/layer or None, optional) – If not None, applies an activation function to the updated node features. Default: None.
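To make the propagation rule concrete, below is a minimal dense-tensor sketch of the equation above. It is an illustration, not DGL's actual implementation; the function name tag_conv_reference and the per-hop weight list thetas are hypothetical stand-ins for the $\Theta_{k}$ in the formula.

import torch as th

def tag_conv_reference(A, X, thetas):
    # A: (N, N) dense adjacency matrix; X: (N, D_in) node features;
    # thetas: list of K+1 weight matrices of shape (D_in, D_out),
    # playing the role of Theta_k in the equation above.
    deg = A.sum(dim=1)
    d_inv_sqrt = deg.clamp(min=1).pow(-0.5)  # D^{-1/2}, guarding isolated nodes
    A_hat = d_inv_sqrt.unsqueeze(1) * A * d_inv_sqrt.unsqueeze(0)  # D^{-1/2} A D^{-1/2}
    H, out = X, X @ thetas[0]                # k = 0 term
    for theta in thetas[1:]:
        H = A_hat @ H                        # one more propagation hop
        out = out + H @ theta                # accumulate this hop's projection
    return out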
- lin
The learnable linear module.
- Type:
torch.nn.Module
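In recent DGL releases, the layer concatenates the $K+1$ hop-wise results and applies lin, a single torch.nn.Linear mapping in_feats * (k + 1) inputs to out_feats outputs. This is an implementation detail worth verifying against your installed version:

>>> conv = TAGConv(10, 2, k=2)
>>> conv.lin
Linear(in_features=30, out_features=2, bias=True)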
Example
>>> import dgl
>>> import numpy as np
>>> import torch as th
>>> from dgl.nn import TAGConv
>>>
>>> g = dgl.graph(([0,1,2,3,2,5], [1,2,3,4,0,3]))
>>> feat = th.ones(6, 10)
>>> conv = TAGConv(10, 2, k=2)
>>> res = conv(g, feat)
>>> res
tensor([[ 0.5490, -1.6373],
        [ 0.5490, -1.6373],
        [ 0.5490, -1.6373],
        [ 0.5513, -1.8208],
        [ 0.5215, -1.6044],
        [ 0.3304, -1.9927]], grad_fn=<AddmmBackward>)
- forward(graph, feat, edge_weight=None)[source]
Description
Compute topology adaptive graph convolution.
- param graph:
The graph.
- type graph:
DGLGraph
- param feat:
The input feature of shape $(N, D_{in})$ where $D_{in}$ is the size of the input feature and $N$ is the number of nodes.
- type feat:
torch.Tensor
- param edge_weight:
edge_weight to use in the message passing process. This is equivalent to using a weighted adjacency matrix in the equation above, and $\tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$ is based on dgl.nn.pytorch.conv.graphconv.EdgeWeightNorm.
- type edge_weight:
torch.Tensor, optional
- returns:
The output feature of shape $(N, D_{out})$ where $D_{out}$ is the size of the output feature.
- rtype:
torch.Tensor
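As a sketch of edge_weight usage (the self-loops and random weights below are arbitrary choices for illustration), raw edge weights can be normalized with dgl.nn.EdgeWeightNorm before being passed to forward():

>>> from dgl.nn import EdgeWeightNorm
>>> g = dgl.add_self_loop(dgl.graph(([0,1,2,3,2,5], [1,2,3,4,0,3])))
>>> feat = th.ones(6, 10)
>>> edge_weight = th.rand(g.num_edges())
>>> norm = EdgeWeightNorm(norm='both')
>>> norm_edge_weight = norm(g, edge_weight)
>>> conv = TAGConv(10, 2, k=2)
>>> res = conv(g, feat, edge_weight=norm_edge_weight)
>>> res.shape
torch.Size([6, 2])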