The results of running the benchmarks are stored in .csv files. To make downloading the reports easier, there is a separate .zip file named reports.zip that contains all the .csv data files.
Report directories are structured on a per-size basis, with three special reports covering all input sizes:
- report_classic - all input sizes for the classic stream
- report_forkjoin - all input sizes for the fork/join stream
- report_whole - all input sizes for both the classic and fork/join streams
Each of the report directories above contains three separate files:
- averagetime.csv - results for average time mode benchmarks
- throughput.csv - results for throughput mode benchmarks
- total.csv - combined results for both modes
For the individual reports, I use two formats: averagetime.csv and throughput.csv share one format, called the modes format, while total.csv has a separate format, called the total format.
The modes report contains eight columns:
- Label - name of the benchmark
- Input Size - benchmark input size
- Threads - number of threads used in the benchmark, taken from the set {1, 4, 7, 8, 16, 19, 32}
- Mode - benchmark mode, either average time or throughput
- Cnt - the number of benchmark iterations; it should always equal 20
- Score - actual results of benchmark
- Score Mean Error - benchmark measurement error
- Units - units of benchmark, either ms/op (for average time) or ops/ms (for throughput)
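
To illustrate, here is a minimal sketch of reading a modes-format file in Java. It assumes a header row, comma-separated values, and the file path report_whole/averagetime.csv from the unpacked reports.zip; adjust the path and delimiter to match the actual files.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class ModesReportReader {

    public static void main(String[] args) throws IOException {
        // Assumed location of a modes-format file from the unpacked reports.zip.
        Path report = Path.of("report_whole/averagetime.csv");

        List<String> lines = Files.readAllLines(report);
        // Columns (modes format): Label, Input Size, Threads, Mode, Cnt,
        // Score, Score Mean Error, Units. The header row (assumed) is skipped.
        for (String line : lines.subList(1, lines.size())) {
            String[] cols = line.split(",");
            String label = cols[0].trim();
            String inputSize = cols[1].trim();
            String threads = cols[2].trim();
            double score = Double.parseDouble(cols[5].trim());
            String units = cols[7].trim();
            System.out.printf("%s (size %s, %s threads): %.3f %s%n",
                    label, inputSize, threads, score, units);
        }
    }
}
```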
The total report contains ten columns:
- Label - name of the benchmark
- Input Size - benchmark input size
- Threads - number of threads used in the benchmark, taken from the set {1, 4, 7, 8, 16, 19, 32}
- Cnt - the number of benchmark iterations; it should always equal 20
- AvgTimeScore - actual results of benchmark for average time mode
- AvgTimeMeanError - benchmark measurement error for average time mode
- AvgUnits - units of benchmark for average time mode in ms/op
- ThroughputScore - actual results of benchmark for throughput mode
- ThroughputMeanError - benchmark measurement error for throughput mode
- ThroughputUnits - units of benchmark for throughput mode in ops/ms
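
The total format can be read the same way. The companion sketch below again assumes a header row, comma delimiters, and the hypothetical path report_whole/total.csv, and simply prints both scores for each row.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class TotalReportReader {

    public static void main(String[] args) throws IOException {
        // Assumed location of a total-format file from the unpacked reports.zip.
        Path report = Path.of("report_whole/total.csv");

        List<String> lines = Files.readAllLines(report);
        // Columns (total format): Label, Input Size, Threads, Cnt, AvgTimeScore,
        // AvgTimeMeanError, AvgUnits, ThroughputScore, ThroughputMeanError,
        // ThroughputUnits. The header row (assumed) is skipped.
        for (String line : lines.subList(1, lines.size())) {
            String[] cols = line.split(",");
            String label = cols[0].trim();
            double avgTime = Double.parseDouble(cols[4].trim());    // ms/op
            double throughput = Double.parseDouble(cols[7].trim()); // ops/ms
            System.out.printf("%s: %.3f ms/op, %.3f ops/ms%n", label, avgTime, throughput);
        }
    }
}
```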