CPUCachedFeature
- class dgl.graphbolt.CPUCachedFeature(fallback_feature: Feature, cache: CPUFeatureCache, offset: int = 0)[source]
Bases:
Feature
CPU cached feature wrapping a fallback feature. Use cpu_cached_feature to construct an instance of this class.
- Parameters:
fallback_feature (Feature) – The fallback feature.
cache (CPUFeatureCache) – A CPUFeatureCache instance to serve as the cache backend.
offset (int, optional) – The offset value to add to the given ids before using the cache. This parameter is useful if multiple `CPUCachedFeature`s share a single CPUFeatureCache object.
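To make the cache/fallback/offset interplay concrete, here is a minimal, hypothetical pure-Python sketch (not the dgl.graphbolt implementation): reads are served from a shared cache when possible, and misses fall back to the underlying feature store and populate the cache. All names below are illustrative only.

```python
# Illustrative sketch only; the real CPUCachedFeature operates on tensors
# and a CPUFeatureCache backend. Dicts and lists stand in for both here.
class SketchCachedFeature:
    def __init__(self, fallback, cache, offset=0):
        self.fallback = fallback  # maps id -> value (the fallback feature)
        self.cache = cache        # shared dict serving as the cache backend
        self.offset = offset      # added to ids before every cache lookup
        self.hits = 0
        self.misses = 0

    def read(self, ids):
        out = []
        for i in ids:
            # The offset keeps multiple features sharing one cache
            # from colliding on the same keys.
            key = i + self.offset
            if key in self.cache:
                self.hits += 1
                out.append(self.cache[key])
            else:
                self.misses += 1
                value = self.fallback[i]  # miss: fetch from the fallback
                self.cache[key] = value   # and populate the cache
                out.append(value)
        return out

    @property
    def miss_rate(self):
        total = self.hits + self.misses
        return self.misses / total if total else 0.0
```

Two sketch features can then share one cache dict safely by using disjoint offsets, which is the scenario the offset parameter is designed for.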
- read(ids: Tensor | None = None)[source]
Read the feature by index.
- Parameters:
ids (torch.Tensor, optional) – The index of the feature. If specified, only the specified indices of the feature are read. If None, the entire feature is returned.
- Returns:
The read feature.
- Return type:
torch.Tensor
- read_async(ids: Tensor)[source]
Read the feature by index asynchronously.
- Parameters:
ids (torch.Tensor) – The index of the feature. Only the specified indices of the feature are read.
- Returns:
The returned generator object yields a future on its read_async_num_stages(ids.device)-th invocation. The read result can be accessed by calling .wait() on that future. It is undefined behavior to call .wait() more than once.
- Return type:
A generator object.
Examples
>>> import dgl.graphbolt as gb
>>> feature = gb.Feature(...)
>>> ids = torch.tensor([0, 2])
>>> for stage, future in enumerate(feature.read_async(ids)):
...     pass
>>> assert stage + 1 == feature.read_async_num_stages(ids.device)
>>> result = future.wait()  # result contains the read values.
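The staged-generator protocol above can be illustrated with a small, runnable mock that does not require dgl or torch. This is a hypothetical stand-in: read_async yields one future per stage, and .wait() on the last yielded future returns the result.

```python
# Mock of the read_async protocol; not the real dgl.graphbolt code.
from concurrent.futures import ThreadPoolExecutor

class MockAsyncFeature:
    def __init__(self, data):
        self.data = data
        self.pool = ThreadPoolExecutor(max_workers=1)

    def read_async_num_stages(self):
        # Pretend the read pipeline has two stages.
        return 2

    def read_async(self, ids):
        # Stage 1: e.g. gather cache hits (simulated as a no-op here).
        yield self.pool.submit(lambda: None)
        # Stage 2: the future yielded last carries the read result.
        future = self.pool.submit(lambda: [self.data[i] for i in ids])
        # Mimic torch futures, which expose .wait() rather than .result().
        future.wait = future.result
        yield future
```

Consuming it follows the pattern from the Examples: drain the generator, then call .wait() on the final future.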
- read_async_num_stages(ids_device: device)[source]
The number of stages of the read_async operation. See the read_async function for usage directions. This function must return the number of yield operations performed when read_async is used with a tensor residing on ids_device.
- Parameters:
ids_device (torch.device) – The device of the ids parameter passed into read_async.
- Returns:
The number of stages of the read_async operation.
- Return type:
int
- size()[source]
Get the size of the feature.
- Returns:
The size of the feature.
- Return type:
torch.Size
- update(value: Tensor, ids: Tensor | None = None)[source]
Update the feature.
- Parameters:
value (torch.Tensor) – The updated value of the feature.
ids (torch.Tensor, optional) – The indices of the feature to update. If specified, only the specified indices of the feature will be updated: row ids[i] of the feature is updated to value[i], so ids and value must have the same length. If None, the entire feature will be updated.
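The update semantics can be sketched in plain Python. This is a hedged, hypothetical illustration (lists stand in for tensors, a dict for the cache): row ids[i] is set to value[i], and any cached copy is refreshed so the cache never serves stale data.

```python
# Illustrative sketch of update(); not the dgl.graphbolt implementation.
def sketch_update(feature, cache, value, ids=None):
    if ids is None:
        feature[:] = value  # no ids: replace the entire feature
        cache.clear()       # invalidate everything that was cached
        return
    assert len(ids) == len(value), "ids and value must have the same length"
    for i, v in zip(ids, value):
        feature[i] = v      # row ids[i] is updated to value[i]
        if i in cache:
            cache[i] = v    # keep cached rows consistent (write-through)
```

Refreshing (or invalidating) cached rows on update is what keeps subsequent read() calls correct when they hit the cache.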
- property cache_size_in_bytes
Return the size taken by the cache in bytes.
- property miss_rate
Returns the cache miss rate since creation.