This repo contains examples of SystemsLab benchmark configurations from real-world projects. A config can be thought of as a recipe for reproducing a benchmark, and bundles together what would otherwise be a collection of shell scripts, along with some useful metadata.
Configs are also parameterizable templates: for example, the llama-bench config is templated to allow customizing the ML model, model quantization method, GPU, and power limit.
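As a rough illustration of what such a parameter space covers (this is not SystemsLab's actual config schema; every field name and value below is invented), the llama-bench template's knobs might be described like this:

```python
# Hypothetical illustration only: field names and values are invented
# and do not reflect SystemsLab's real config format.
llama_bench_params = {
    "model": ["llama-3-8b", "llama-3-70b"],   # ML model to benchmark
    "quantization": ["q4_0", "q8_0"],         # model quantization method
    "gpu": ["a100", "h100"],                  # target GPU
    "power_limit_watts": [250, 300],          # GPU power limit
}
```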
Once a benchmark configuration has been designed, it can easily be used to execute an automated benchmark sweep across a specified parameter space, be re-run on a schedule, or be triggered by external events.
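As a minimal sketch of what a sweep amounts to, the snippet below enumerates every combination in a parameter space and runs the benchmark once per point. `run_benchmark` is a hypothetical stand-in for the actual runner, not a SystemsLab API:

```python
import itertools

# Hypothetical parameter space (repeated here so the snippet runs standalone).
param_space = {
    "model": ["llama-3-8b", "llama-3-70b"],
    "quantization": ["q4_0", "q8_0"],
    "power_limit_watts": [250, 300],
}

def run_benchmark(config_name: str, **params) -> None:
    """Placeholder for the real benchmark runner."""
    print(f"running {config_name} with {params}")

# Sweep: one run per point in the Cartesian product of all parameter values.
for combo in itertools.product(*param_space.values()):
    run_benchmark("llama-bench", **dict(zip(param_space, combo)))
```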