@@ -509,10 +509,11 @@ impl<S: SpawnableScheduler<TH>, TH: TaskHandler> ThreadManager<S, TH> {
 // Now, this is the main loop for the scheduler thread, which is a special beast.
 //
-// That's because it's the most notable bottleneck of throughput. Unified scheduler's
-// overall throughput is largely dependant on its ultra-low latency characteristic,
-// which is the most important design goal of the scheduler in order to reduce the
-// transaction confirmation latency for end users.
+// That's because it could be the most notable bottleneck of throughput in the future
+// when there are ~100 handler threads. Unified scheduler's overall throughput is
+// largely dependent on its ultra-low latency characteristic, which is the most
+// important design goal of the scheduler in order to reduce the transaction
+// confirmation latency for end users.
 //
 // Firstly, the scheduler thread must handle incoming messages from thread(s) owned by
 // the replay stage or the banking stage. It also must handle incoming messages from
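To make the message flow described in the comment above concrete, here is a minimal sketch of a latency-focused scheduler loop. It is not the actual ThreadManager implementation: the channel names, the Task/HandledTask types, and the single shared runnable-task channel are assumptions for illustration only.

```rust
// Hypothetical sketch: the scheduler thread multiplexes two message sources
// (new tasks from the replay/banking side, completions from handler threads)
// and forwards each task individually, without batching.
use crossbeam_channel::{select, Receiver, Sender};

struct Task {
    transaction_index: usize, // stand-in for a real sanitized transaction
}

struct HandledTask {
    transaction_index: usize,
    succeeded: bool,
}

fn scheduler_main_loop(
    new_task_receiver: Receiver<Task>,            // from replay/banking stage threads
    handled_task_receiver: Receiver<HandledTask>, // from handler threads
    runnable_task_sender: Sender<Task>,           // shared by all handler threads
) {
    loop {
        // Block only on channel readiness; any cpu-bound work done here directly
        // adds to the latency of every in-flight transaction.
        select! {
            recv(new_task_receiver) -> msg => {
                let Ok(task) = msg else { break }; // producer side dropped: shut down
                // Forward immediately (one task == one "batch"), keeping the
                // scheduler thread itself as close to zero-cost as possible.
                if runnable_task_sender.send(task).is_err() {
                    break; // all handler threads are gone
                }
            },
            recv(handled_task_receiver) -> msg => {
                let Ok(handled) = msg else { break };
                // Bookkeeping only; heavy result processing belongs elsewhere.
                let _ = (handled.transaction_index, handled.succeeded);
            },
        }
    }
}
```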
@@ -529,6 +530,11 @@ impl<S: SpawnableScheduler<TH>, TH: TaskHandler> ThreadManager<S, TH> {
 // relies on the assumption that there's no considerable penalty arising from the
 // unbatched manner of processing.
 //
+// Note that this assumption isn't true as of this writing. The current code path
+// underneath execute_batch() isn't optimized at all for unified scheduler's load
+// pattern (i.e. batches with just a single transaction). This will be addressed in
+// the future.
+//
 // These two key elements of the design philosophy lead to the rather unforgiving
 // implementation burden: Degraded performance would acutely manifest from an even tiny
 // amount of individual cpu-bound processing delay in the scheduler thread, like when
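The execute_batch() caveat added above can be illustrated with a back-of-the-envelope cost model. The helper and the numbers below are assumptions, not measurements of the real code path; they only show why a fixed per-batch cost stops being amortized once every batch carries a single transaction.

```rust
// Illustrative cost model (assumed numbers, not measurements): with batches of
// one transaction, the fixed per-batch setup cost is paid once per transaction
// instead of being amortized over the whole batch.
fn total_micros(num_txs: u64, batch_size: u64, per_batch_overhead: u64, per_tx_cost: u64) -> u64 {
    let num_batches = num_txs.div_ceil(batch_size);
    num_batches * per_batch_overhead + num_txs * per_tx_cost
}

fn main() {
    let (num_txs, per_batch_overhead, per_tx_cost) = (10_000, 50, 20);

    // Conventional batched replay: overhead amortized over 64 transactions.
    let batched = total_micros(num_txs, 64, per_batch_overhead, per_tx_cost);
    // Unified scheduler's load pattern: one transaction per batch.
    let unbatched = total_micros(num_txs, 1, per_batch_overhead, per_tx_cost);

    println!("batched: {batched} us, unbatched: {unbatched} us");
    // With these assumed numbers the unbatched path costs over 3x as much in
    // total, which is why the per-batch setup under execute_batch() matters.
}
```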