CopyTo
- class dgl.graphbolt.CopyTo(datapipe, device, extra_attrs=None)[source]
Bases: `IterDataPipe`

DataPipe that transfers each element yielded from the previous DataPipe to the given device. For `MiniBatch`, only the related attributes (automatically inferred) will be transferred by default. If you want to transfer any other attributes, specify them in `extra_attrs`.

Functional name: `copy_to`.

When the data has a `to` method implemented, `CopyTo` is equivalent to `for data in datapipe: yield data.to(device)`.
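The equivalence above can be sketched in plain Python without DGL. The `Record` class and `copy_to` generator below are hypothetical stand-ins for illustration: `Record` mimics an element that implements `.to(device)` (like a tensor), and the generator reproduces the transfer loop:

```python
class Record:
    """Stand-in for a datapipe element that implements .to(device)."""

    def __init__(self, value, device="cpu"):
        self.value = value
        self.device = device

    def to(self, device):
        # Return a copy of this record placed on the target device.
        return Record(self.value, device)


def copy_to(datapipe, device):
    """Transfer each element yielded by the previous pipe to `device`."""
    for data in datapipe:
        yield data.to(device)


# Move every element of a small pipeline to a (pretend) GPU device.
moved = list(copy_to((Record(v) for v in range(3)), device="cuda"))
print([r.device for r in moved])  # ['cuda', 'cuda', 'cuda']
```

In the real class, this loop is only the fallback; for `MiniBatch` the transfer is restricted to the inferred attributes described below.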
For `MiniBatch`, only a subset of attributes is transferred by default, to accelerate the process:

- When `seed_nodes` is not None and `node_pairs` is None, a node-related task is inferred. Only `labels`, `sampled_subgraphs`, `node_features` and `edge_features` will be transferred.
- When `node_pairs` is not None and `seed_nodes` is None, an edge/link-related task is inferred. Only `labels`, `compacted_node_pairs`, `compacted_negative_srcs`, `compacted_negative_dsts`, `sampled_subgraphs`, `node_features` and `edge_features` will be transferred.
- Otherwise, all attributes will be transferred.
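The inference rules above can be summarized in a short sketch. This is a simplification for illustration, not the library's implementation; the function name and the plain lists standing in for `MiniBatch` fields are hypothetical:

```python
# Attribute sets from the rules above (illustrative, not the real internals).
NODE_ATTRS = ["labels", "sampled_subgraphs", "node_features", "edge_features"]
EDGE_ATTRS = ["labels", "compacted_node_pairs", "compacted_negative_srcs",
              "compacted_negative_dsts", "sampled_subgraphs",
              "node_features", "edge_features"]


def attrs_to_transfer(seed_nodes, node_pairs, all_attrs, extra_attrs=()):
    """Pick which attributes to move, mirroring the inference rules."""
    if seed_nodes is not None and node_pairs is None:
        attrs = list(NODE_ATTRS)   # node-related task inferred
    elif node_pairs is not None and seed_nodes is None:
        attrs = list(EDGE_ATTRS)   # edge/link-related task inferred
    else:
        attrs = list(all_attrs)    # ambiguous: transfer everything
    # extra_attrs are always added, regardless of the inferred task.
    return attrs + [a for a in extra_attrs if a not in attrs]


print(attrs_to_transfer(seed_nodes=[0, 1], node_pairs=None,
                        all_attrs=["labels", "seed_nodes"]))
# ['labels', 'sampled_subgraphs', 'node_features', 'edge_features']
```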
If you want some other attributes to be transferred as well, specify their names in `extra_attrs`. For instance, the following code will copy `seed_nodes` to the GPU as well:

datapipe = datapipe.copy_to(device="cuda", extra_attrs=["seed_nodes"])
- Parameters:
  - datapipe (DataPipe) – The DataPipe.
  - device (torch.device) – The PyTorch CUDA device.
  - extra_attrs (List[str]) – The extra attributes of the data in the DataPipe that you want carried to the specified device. The attributes listed in `extra_attrs` will be transferred regardless of the inferred task. This can also be applied to classes other than `MiniBatch`.