EGATConv
- class dgl.nn.pytorch.conv.EGATConv(in_node_feats, in_edge_feats, out_node_feats, out_edge_feats, num_heads, bias=True)[source]
Bases: Module
Graph attention layer that handles edge features, from Rossmann-Toolbox (see supplementary data).
The difference from a regular GAT lies in how the unnormalized attention scores $e_{ij}$ are obtained:

$$f_{ij}^{\prime} = \mathrm{LeakyReLU}\left(A [h_{i} \| f_{ij} \| h_{j}]\right)$$

$$e_{ij} = \vec{F} f_{ij}^{\prime}$$

where $f_{ij}^{\prime}$ are the new edge features, $A$ is a weight matrix and $\vec{F}$ is a weight vector. After that, the resulting node features are updated in the same way as in a regular GAT. A minimal sketch of this score computation follows the parameter list below.

- Parameters:
  - in_node_feats (int, or pair of ints) – Input feature size; i.e., the number of dimensions of $h_i$. EGATConv can be applied to a homogeneous graph or a unidirectional bipartite graph. If the layer is to be applied to a unidirectional bipartite graph, in_node_feats specifies the input feature sizes on the source and destination nodes. If a scalar is given, the source and destination node feature sizes take the same value.
  - in_edge_feats (int) – Input edge feature size, i.e., the number of dimensions of $f_{ij}$.
  - out_node_feats (int) – Output node feature size.
  - out_edge_feats (int) – Output edge feature size, i.e., the number of dimensions of $f_{ij}^{\prime}$.
  - num_heads (int) – Number of attention heads.
  - bias (bool, optional) – If True, adds a bias term to $f_{ij}^{\prime}$. Default: True.
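To make the formulas above concrete, here is a minimal single-head sketch of the score computation for one edge, written with plain torch operations. This is an illustration only, not the layer's actual implementation; the names A, F_vec, h_i, h_j, and f_ij, as well as the feature sizes, are hypothetical.

>>> import torch as th
>>> import torch.nn as nn
>>> import torch.nn.functional as fn
>>> D_node, D_edge, D_edge_out = 20, 12, 10          # hypothetical feature sizes
>>> A = nn.Linear(2 * D_node + D_edge, D_edge_out)   # plays the role of the weight matrix A
>>> F_vec = nn.Linear(D_edge_out, 1, bias=False)     # plays the role of the weight vector F
>>> h_i, h_j = th.rand(D_node), th.rand(D_node)      # features of nodes i and j
>>> f_ij = th.rand(D_edge)                           # feature of edge (i, j)
>>> f_prime = fn.leaky_relu(A(th.cat([h_i, f_ij, h_j])))  # f'_ij = LeakyReLU(A[h_i || f_ij || h_j])
>>> e_ij = F_vec(f_prime)                            # unnormalized attention score e_ij

In the layer itself this score is computed per head and then normalized over each node's incoming edges, as in a regular GAT.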
Examples
>>> import dgl
>>> import numpy as np
>>> import torch as th
>>> from dgl.nn import EGATConv
>>> # Case 1: Homogeneous graph
>>> num_nodes, num_edges = 8, 30
>>> # generate a random graph
>>> graph = dgl.rand_graph(num_nodes, num_edges)
>>> node_feats = th.rand((num_nodes, 20))
>>> edge_feats = th.rand((num_edges, 12))
>>> egat = EGATConv(in_node_feats=20,
...                 in_edge_feats=12,
...                 out_node_feats=15,
...                 out_edge_feats=10,
...                 num_heads=3)
>>> # forward pass
>>> new_node_feats, new_edge_feats = egat(graph, node_feats, edge_feats)
>>> new_node_feats.shape, new_edge_feats.shape
(torch.Size([8, 3, 15]), torch.Size([30, 3, 10]))
>>> # Case 2: Unidirectional bipartite graph
>>> u = [0, 1, 0, 0, 1]
>>> v = [0, 1, 2, 3, 2]
>>> g = dgl.heterograph({('A', 'r', 'B'): (u, v)})
>>> u_feat = th.tensor(np.random.rand(2, 25).astype(np.float32))
>>> v_feat = th.tensor(np.random.rand(4, 30).astype(np.float32))
>>> nfeats = (u_feat, v_feat)
>>> efeats = th.tensor(np.random.rand(5, 15).astype(np.float32))
>>> in_node_feats = (25, 30)
>>> in_edge_feats = 15
>>> out_node_feats = 10
>>> out_edge_feats = 5
>>> num_heads = 3
>>> egat_model = EGATConv(in_node_feats,
...                       in_edge_feats,
...                       out_node_feats,
...                       out_edge_feats,
...                       num_heads,
...                       bias=True)
>>> # forward pass
>>> new_node_feats, new_edge_feats, attentions = egat_model(g, nfeats, efeats, get_attention=True)
>>> new_node_feats.shape, new_edge_feats.shape, attentions.shape
(torch.Size([4, 3, 10]), torch.Size([5, 3, 5]), torch.Size([5, 3, 1]))
- forward(graph, nfeats, efeats, edge_weight=None, get_attention=False)[source]
Compute new node and edge features.
- Parameters:
  - graph (DGLGraph) – The graph.
  - nfeats (torch.Tensor or pair of torch.Tensor) – If a torch.Tensor is given, the input node features of shape $(N, D_{in})$, where $D_{in}$ is the size of the input node feature and $N$ is the number of nodes. If a pair of torch.Tensor is given, the pair must contain two tensors of shape $(N_{src}, D_{in}^{src})$ and $(N_{dst}, D_{in}^{dst})$.
  - efeats (torch.Tensor) – The input edge features of shape $(E, F_{in})$, where $F_{in}$ is the size of the input edge feature and $E$ is the number of edges.
  - edge_weight (torch.Tensor, optional) – A 1D tensor of edge weight values. Shape: $(E,)$.
  - get_attention (bool, optional) – Whether to return the attention values. Default: False.
- Returns:
  - pair of torch.Tensor – Node output features followed by edge output features. The node output features are of shape $(N, H, D_{out})$ and the edge output features are of shape $(E, H, F_{out})$, where $H$ is the number of heads, $D_{out}$ is the size of the output node feature, and $F_{out}$ is the size of the output edge feature.
  - torch.Tensor, optional – The attention values of shape $(E, H, 1)$. This is returned only when get_attention is True.
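The edge_weight argument is not exercised in the examples above. A minimal sketch, reusing the Case 1 setup with random placeholder weights, might look as follows; scaling edges this way leaves the output shapes unchanged:

>>> # per-edge weights (placeholder values), one scalar per edge
>>> edge_weight = th.rand(num_edges)
>>> weighted_node_feats, weighted_edge_feats = egat(graph, node_feats, edge_feats,
...                                                 edge_weight=edge_weight)
>>> weighted_node_feats.shape, weighted_edge_feats.shape
(torch.Size([8, 3, 15]), torch.Size([30, 3, 10]))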