dgl.graphbolt.gpu_cached_feature

dgl.graphbolt.gpu_cached_feature(fallback_features: Feature | Dict[FeatureKey, Feature], max_cache_size_in_bytes: int) → GPUCachedFeature | Dict[FeatureKey, GPUCachedFeature]

Wraps the fallback feature(s) with a GPU cache that uses the least recently used (LRU) algorithm as its eviction policy.

Places the GPU cache on torch.cuda.current_device().

Parameters:
  • fallback_features (Union[Feature, Dict[FeatureKey, Feature]]) – The fallback feature(s).

  • max_cache_size_in_bytes (int) – The capacity of the GPU cache in bytes.

Returns:

The feature(s) wrapped with GPUCachedFeature.

Return type:

Union[GPUCachedFeature, Dict[FeatureKey, GPUCachedFeature]]
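
Example (a minimal, hedged sketch: it assumes a CUDA device is available and uses a pinned-memory TorchBasedFeature as the fallback; adapt the feature construction and cache size to your own data):

>>> import torch
>>> from dgl import graphbolt as gb
>>> # In-memory fallback feature; pinned so missed rows can be read from the GPU.
>>> fallback = gb.TorchBasedFeature(torch.randn(1000, 128).pin_memory())
>>> # Wrap it with an LRU GPU cache of roughly 4 MB on the current CUDA device.
>>> cached = gb.gpu_cached_feature(fallback, max_cache_size_in_bytes=4 * 1024 * 1024)
>>> ids = torch.tensor([0, 5, 42], device="cuda")
>>> cached.read(ids).shape
torch.Size([3, 128])

Passing a dictionary of features instead returns a dictionary of GPUCachedFeature objects that share the given cache capacity.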