
clarify metrics / collectors #80

Open
@markwort

Description

Hey, I've been looking at benchbase and playing around with it for a bit, and of course I've read the \cite{DifallahPCC13} paper.

From that paper, I get the impression that oltpbench, and subsequently benchbase, can gather metrics: OS-level metrics such as disk throughput and disk IOPS (as in Figure 5 of the paper), or DB-level metrics such as statistics about internal buffers (as in Figure 6).

When running benchbase, I end up with some files in the results directory, one of which contains database-internal metrics gathered at the end of the benchmark run (in my case, a JSON dump of most of the pg_stat_* views).
After looking at the code a bit, I've come to the conclusion that there are no other metrics and, in particular, no per-second metrics.

Having more metrics would be very interesting, especially periodic metrics collected over the runtime of the benchmark, as suggested in the paper.
When using an external metrics gatherer, care needs to be taken to align the OS/DB metrics with the benchmark metrics (requests/sec, latencies) on the same timeframe.
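For what it's worth, here is a minimal sketch of the kind of external, timestamped sampler I have in mind, assuming a PostgreSQL target and hypothetical connection settings (URL, user, password); the columns are from the standard pg_stat_database view. Writing a wall-clock timestamp with every sample is what makes it possible to align these numbers with benchbase's own per-interval throughput/latency output afterwards.

```java
// Minimal sketch of an external per-second DB-metrics sampler (not part of
// benchbase). Connection settings below are hypothetical; adjust as needed.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.time.Instant;

public class PgStatSampler {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/benchbase";
        try (Connection conn = DriverManager.getConnection(url, "admin", "password")) {
            System.out.println("epoch_ms,xact_commit,blks_read,blks_hit,tup_returned");
            while (true) {  // stop with Ctrl-C, or add a duration check
                try (Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery(
                         "SELECT xact_commit, blks_read, blks_hit, tup_returned "
                         + "FROM pg_stat_database WHERE datname = current_database()")) {
                    if (rs.next()) {
                        // One timestamped CSV row per sample. The counters are
                        // cumulative; compute per-second deltas in post-processing.
                        System.out.printf("%d,%d,%d,%d,%d%n",
                                Instant.now().toEpochMilli(),
                                rs.getLong("xact_commit"),
                                rs.getLong("blks_read"),
                                rs.getLong("blks_hit"),
                                rs.getLong("tup_returned"));
                    }
                }
                Thread.sleep(1000);  // sample roughly once per second
            }
        }
    }
}
```

An equivalent sampler for OS-level metrics (e.g. reading /proc/diskstats on Linux) could log to the same timestamped format, so everything can be joined on wall-clock time afterwards.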

I just want to confirm that I'm not missing the secret dial to turn on more exhaustive metrics before I start gathering them with external tools.

For someone searching for this in the future, please also tell us whether such metrics gathering is even in the scope of this project, or whether the recommendation is to use external tools.
I understand that metrics gathering is itself almost as big a challenge as benchmarking: there are so many different things to consider, everyone wants to see different things, and they all live in different places in different operating systems and DBMSs.

Metadata

Labels: enhancement (New feature or request)
