Commit ce03e4b

final report: improve explanation of pipelined mode
Admit we're just giving a high level summary and cite our own previous work for a full detailed treatment. Also improve the explanation of the pipelining diagram.
1 parent 9a43c13 commit ce03e4b

File tree

1 file changed: +22 -9 lines changed

doc/final-report/final-report.md

Lines changed: 22 additions & 9 deletions
@@ -1075,15 +1075,28 @@ constraints:
 
 These constraints leave room for concurrent and parallel execution, because they
 allow overlapped execution of multiple batches in a pipelined way. The reason
-why such an execution is possible is somewhat subtle though. The updates have to
-be executed serially, but the lookups can be executed out of order, provided the
-results we ultimately report for the lookups are correct. The trick is to
-perform the lookups using an older value of the database and then adjust their
-results using the updates from the later batches. This allows starting the
-lookups earlier and thus having multiple lookups not only overlapping with each
-other but also with updates. As an illustration, the following figure depicts
-such pipelined execution and its dataflow for the case of four concurrent
-pipeline stages, achieved using two cores with two threads running on each.
+why such an execution is possible is somewhat subtle though. We provide a high
+level summary here. For a full formal treatment see our previous work
+[@utxo-db-api, Sections 7, 7.5].
+
+The updates have to be executed serially, but the lookups can be executed out
+of order, provided the results we ultimately report for the lookups are correct.
+The trick is to perform the lookups using an older value of the database and
+then adjust their results using the updates from the later batches. This allows
+starting the lookups earlier and thus having multiple lookups not only
+overlapping with each other but also with updates.
+
+As an illustration, the following figure depicts such pipelined execution and
+its dataflow for the case of four concurrent pipeline stages, achieved using
+two cores with two threads running on each. The bars represent threads doing
+work over time. The blue portions of the bars represent threads doing CPU
+work, while the green portions represent threads waiting on I/O to complete.
+The key observation from this diagram is that multiple cores can be submitting
+and waiting on I/O concurrently in a staggered way. One can also observe that
+there is the opportunity on a single core to overlap CPU work with waiting on
+I/O to complete. Note, however, that this diagram is theoretical: it shows the
+opportunity for concurrency given the data flows and plausible timings. It does
+not show actual relative timings.
 
 ![Concurrency in the pipelined benchmark mode](pipelining.pdf)
 
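To make the trick described in the new text concrete, here is a minimal sketch of performing lookups against an older snapshot of the database and then adjusting the results using the updates accumulated from later batches. The types and names below are illustrative assumptions, not the report's actual API, and the pending updates are assumed to be pre-collapsed so that the last write per key wins.

```haskell
import           Data.Map.Strict (Map)
import qualified Data.Map.Strict as Map

-- Hypothetical stand-ins for an older database snapshot and the updates from
-- batches applied (logically) after that snapshot was taken.
type Snapshot k v       = Map k v
data Update v           = Insert v | Delete
type PendingUpdates k v = Map k (Update v)   -- collapsed, last write wins

-- Run the lookups against the older snapshot (this part can start early and
-- overlap with other work), then patch each result with the pending updates
-- so that the reported answers match the current logical database state.
lookupsWithFixup :: Ord k => Snapshot k v -> PendingUpdates k v -> [k] -> [(k, Maybe v)]
lookupsWithFixup snapshot pending keys =
    [ (k, fixup k (Map.lookup k snapshot)) | k <- keys ]
  where
    fixup k stale =
      case Map.lookup k pending of
        Nothing         -> stale    -- key untouched by later batches
        Just (Insert v) -> Just v   -- overwritten after the snapshot
        Just Delete     -> Nothing  -- deleted after the snapshot
```

Because the fixup only needs the pending updates, the snapshot lookups themselves are free to run out of order and concurrently, which is what the pipelined stages in the figure exploit.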