dgl.graphbolt.gpu_cached_feature
- dgl.graphbolt.gpu_cached_feature(fallback_features: Feature | Dict[FeatureKey, Feature], max_cache_size_in_bytes: int) → GPUCachedFeature | Dict[FeatureKey, GPUCachedFeature] [source]
GPU cached feature wrapping a fallback feature. It uses the least recently used (LRU) algorithm as the cache eviction policy.
Places the GPU cache on torch.cuda.current_device().
- Parameters:
  - fallback_features (Feature | Dict[FeatureKey, Feature]) – The fallback feature(s) to be wrapped with a GPU cache.
  - max_cache_size_in_bytes (int) – The capacity of the GPU cache in bytes.
- Returns:
The feature(s) wrapped with GPUCachedFeature.
- Return type:
Union[GPUCachedFeature, Dict[FeatureKey, GPUCachedFeature]]
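Example (a minimal usage sketch; the tensor sizes, the pinned host tensor, and the 64 MiB cache budget are illustrative assumptions, with TorchBasedFeature used as the fallback and reads issued through the standard Feature.read() interface):

>>> import torch
>>> from dgl import graphbolt as gb
>>> # Fallback feature kept in pinned host memory on the CPU.
>>> fallback = gb.TorchBasedFeature(torch.rand(100_000, 128).pin_memory())
>>> # Wrap it with a 64 MiB GPU cache placed on torch.cuda.current_device();
>>> # cache evictions follow the LRU policy.
>>> cached = gb.gpu_cached_feature(
...     fallback, max_cache_size_in_bytes=64 * 1024 * 1024
... )
>>> # Reads are served from the GPU cache when possible and fall back to
>>> # the wrapped feature on a miss.
>>> ids = torch.randint(0, 100_000, (1024,), device="cuda")
>>> values = cached.read(ids)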