dgl.graphbolt.gpu_cached_feature

dgl.graphbolt.gpu_cached_feature(fallback_features: Feature | Dict[FeatureKey, Feature], max_cache_size_in_bytes: int) → GPUCachedFeature | Dict[FeatureKey, GPUCachedFeature]

Returns a GPU cached feature wrapping the given fallback feature(s). It uses the least recently used (LRU) algorithm as its cache eviction policy.

The GPU cache is placed on torch.cuda.current_device().

Parameters:
  • fallback_features (Union[Feature, Dict[FeatureKey, Feature]]) – The fallback feature(s).

  • max_cache_size_in_bytes (int) – The capacity of the GPU cache in bytes.

Returns:

The feature(s) wrapped with GPUCachedFeature.

Return type:

Union[GPUCachedFeature, Dict[FeatureKey, GPUCachedFeature]]
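
Examples

A minimal usage sketch. The tensor shape, the 64 MiB cache budget, and the choice of TorchBasedFeature as the fallback storage are illustrative assumptions, not part of this function's contract.

>>> import torch
>>> import dgl.graphbolt as gb
>>> # Fallback feature kept in (ideally pinned) host memory; shape is illustrative.
>>> fallback = gb.TorchBasedFeature(torch.randn(1000, 128).pin_memory())
>>> # Cache up to 64 MiB of feature rows on torch.cuda.current_device(),
>>> # evicted with the LRU policy.
>>> cached_feat = gb.gpu_cached_feature(fallback, 64 * 1024 * 1024)
>>> # Reads are served from the GPU cache on hits and from the fallback on misses.
>>> ids = torch.tensor([0, 42, 511], device="cuda")
>>> rows = cached_feat.read(ids)

Passing a dictionary of features instead of a single Feature returns a dictionary of GPUCachedFeature objects keyed the same way, with all of them sharing the given cache budget.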