Commit 3e2737e

emlin authored and facebook-github-bot committed
fix image link (#2943)
Summary:
Pull Request resolved: #2943

The previous image link is wrong in the md file.

Reviewed By: TroyGarden

Differential Revision: D74148866

fbshipit-source-id: 6b1ab3e09536ec21ee8207d4e203ff3397173e6a
1 parent 574dee9 · commit 3e2737e

File tree: 1 file changed (+3, -3 lines)


rfc/RFC-0002-assets/KV_storage_extension_Design.md

Lines changed: 3 additions & 3 deletions
@@ -10,7 +10,7 @@ Sarunya Pumma, Emma Lin, Ehsan K. Ardestani, Joe Wang
 # High Level Design
 
 Considering the design principles listed above, we have opted for a Key-Value API. TBE will offer a software-managed cache in HBM, as we do when leveraging host-side memory. However, unlike the extension to host-side memory, where we leverage UVA to prefetch, we currently opt for a copy-based API, mandated by design rule \#2, to separate the implementation of the backend KV store from TBE. In the future we might adopt a UVA-based command-queue interface to the KV store if the cost of the copy-based semantics proves prohibitive.
-[![][image1]](https://github.com/pytorch/torchrec/blob/main/rfc/RFC-0002-assets/kv_tbe_training_high_level.png)
+[![image1]](./kv_tbe_training_high_level.png)
 
 Figure 1: High-level architecture of the TBE KV store based extension. The blocks outlined in orange are implemented by TBE.
 
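To make the copy-based semantics in that hunk concrete, here is a minimal Python sketch of what a copy-in/copy-out KV-store backend could look like. The class KVStoreBackend, its get/set signatures, and the toy dict storage are hypothetical illustrations, not TBE's actual interface.

```python
import torch


class KVStoreBackend:
    """Toy in-memory backend; a real one would be SSD- or remote-storage-backed."""

    def __init__(self, dim: int) -> None:
        self.dim = dim
        self.store: dict[int, torch.Tensor] = {}

    def get(self, ids: torch.Tensor, out: torch.Tensor) -> None:
        # Copy-based read: rows are copied into a caller-provided buffer,
        # so the backend never shares memory with the caller.
        for i, key in enumerate(ids.tolist()):
            row = self.store.get(key)
            out[i] = row if row is not None else torch.zeros(self.dim)

    def set(self, ids: torch.Tensor, values: torch.Tensor) -> None:
        # Copy-based write: the backend keeps its own copies of the rows.
        for i, key in enumerate(ids.tolist()):
            self.store[key] = values[i].clone()


backend = KVStoreBackend(dim=4)
ids = torch.tensor([3, 7])
backend.set(ids, torch.randn(2, 4))
staging = torch.empty(2, 4)   # staging buffer owned by the caller (e.g. TBE)
backend.get(ids, staging)     # rows are copied out; no memory is shared
```

Because rows always move through caller-owned buffers, TBE and the backend never alias memory, which is what lets the two implementations evolve independently per design rule \#2.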
@@ -24,7 +24,7 @@ The Auxiliary buffers indicated in Figure 1 provide a scratch pad to stage the d
 
 We expect a training pipeline similar to EMO-DRAM, allowing prefetch(i+1) to overlap with train(i), where i denotes the training iteration number. This requires some extra work on the train pipeline to enable the prefetch pipeline on top of the SSD pipeline. The high-level workflow of pipeline prefetching is shown in the figure below.
 
-[![][image2]](https://github.com/pytorch/torchrec/blob/main/rfc/RFC-0002-assets/kv_tbe_pipeline_prefetching.png)
+[![image2]](./kv_tbe_pipeline_prefetching.png)
 
 # Handling Conflict Miss
 
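As a rough sketch of the prefetch(i+1)/train(i) overlap, the snippet below drives prefetch from a single background worker while the current iteration trains. The prefetch and train_step functions are placeholder stand-ins for the real pipeline stages, not TorchRec's actual train-pipeline API.

```python
from concurrent.futures import ThreadPoolExecutor
import time


def prefetch(batch: int) -> str:
    # Stand-in for reading batch embeddings from the KV store into the HBM cache.
    time.sleep(0.01)
    return f"embeddings for batch {batch}"


def train_step(batch: int, embeddings: str) -> None:
    # Stand-in for forward/backward/optimizer on the current batch.
    time.sleep(0.01)
    print(f"trained batch {batch} using {embeddings}")


with ThreadPoolExecutor(max_workers=1) as pool:
    future = pool.submit(prefetch, 0)          # warm-up: prefetch(0)
    for i in range(4):
        embeddings = future.result()           # wait for prefetch(i) to finish
        future = pool.submit(prefetch, i + 1)  # kick off prefetch(i+1) ...
        train_step(i, embeddings)              # ... while train(i) runs
```

The warm-up submit plays the role of prefetch(0); from then on, each iteration consumes the prefetch issued one step earlier.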
@@ -44,7 +44,7 @@ Similar to EMO+DRAM:
 
 The detailed prefetch workflow is demonstrated in the figure below.
 
-![][image3](https://github.com/pytorch/torchrec/blob/main/rfc/RFC-0002-assets/kv_tbe_prefetch_workflow.png)
+![image3](./kv_tbe_prefetch_workflow.png)
 
 TBE will ensure a unified UVA buffer across prefetch and eviction flows.
 
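One simplified reading of that unified buffer is a single pinned host allocation that both flows stage through: prefetch copies rows from the KV store into it and then up to the HBM cache, while eviction copies rows down from the HBM cache for the KV store to persist. The buffer shape, function names, and copy structure below are assumptions for illustration.

```python
import torch

ROWS, DIM = 1024, 128

# One staging allocation shared by both flows. Pinned (page-locked) host
# memory is the usual way to get a buffer addressable from both sides.
uva_buffer = torch.empty(ROWS, DIM, pin_memory=torch.cuda.is_available())


def prefetch_rows(rows_from_kv: torch.Tensor, hbm_cache: torch.Tensor) -> None:
    n = rows_from_kv.size(0)
    uva_buffer[:n].copy_(rows_from_kv)   # KV store fills the staging buffer
    hbm_cache[:n].copy_(uva_buffer[:n])  # device pulls the rows into the cache


def evict_rows(hbm_cache: torch.Tensor, n: int) -> torch.Tensor:
    uva_buffer[:n].copy_(hbm_cache[:n])  # device pushes evicted rows down
    return uva_buffer[:n].clone()        # KV store copies them out to persist


cache = torch.zeros(ROWS, DIM)  # stands in for the HBM cache (a CUDA tensor in practice)
prefetch_rows(torch.randn(8, DIM), cache)
evicted = evict_rows(cache, 8)
```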