```bash
./run-docker.sh \
  --model-url <MODEL_URL> \
  --model-name <MODEL_NAME> \
  --model-api-key <MODEL_API_KEY>
```
The benchmark results will be stored in the `./benchmark_root` directory relative to where you run the Docker command.
### Examples
```bash
# run against Liquid Labs
./run-docker.sh \
  --model-url https://inference-1.liquid.ai \
  --model-name lfm-40b \
  --model-api-key <MODEL_API_KEY>
```

```bash
# run on-prem
./run-docker.sh \
  --model-url http://localhost:8000 \
  --model-name lfm-40b \
  --model-api-key <MODEL_API_KEY>
```
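Before launching a full run, it can be worth sanity-checking that the endpoint URL and API key are usable. The sketch below assumes the inference server is OpenAI-compatible and exposes a `/v1/models` route (common for hosted providers and local servers, but not confirmed by this repo); it only builds the request so you can inspect it before sending.

```python
# Sketch: construct a GET /v1/models request to sanity-check the endpoint.
# ASSUMPTION: the server is OpenAI-compatible; adjust the path if your
# provider uses a different route.
import urllib.request

def build_models_request(model_url: str, api_key: str) -> urllib.request.Request:
    """Construct (but do not send) a GET /v1/models request with a Bearer token."""
    return urllib.request.Request(
        url=model_url.rstrip("/") + "/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = build_models_request("http://localhost:8000", "<MODEL_API_KEY>")
print(req.full_url)
# To actually send it: urllib.request.urlopen(req, timeout=10)
```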
- Start with a new conda environment with `python=3.11`:

  ```bash
  conda create -n ruler python=3.11
  conda activate ruler
  ```
- Get the model provider URL and API key. To run against Liquid Labs, get an API key here.
- Run `./run-local.sh --model-url <MODEL_URL> --model-name <MODEL_NAME> --model-api-key <LIQUID_API_KEY>` to install the necessary packages and run RULER.
### Examples
```bash
# run against Liquid Labs
./run-local.sh \
  --model-url https://inference-1.liquid.ai \
  --model-name lfm-40b \
  --model-api-key <MODEL_API_KEY>
```

```bash
# run on-prem
./run-local.sh \
  --model-url http://localhost:8000 \
  --model-name lfm-40b \
  --model-api-key <MODEL_API_KEY>
```
The benchmark results will be stored in the `./benchmark_root` directory under the project root.
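Once a run finishes, you can enumerate what was produced. A minimal sketch, assuming only that results land somewhere under `./benchmark_root` (the internal file layout depends on the benchmark configuration and is not specified here):

```python
# Sketch: walk the results directory and list every file it contains.
# The directory name comes from this README; the layout inside it is
# run-dependent, so this just enumerates whatever is there.
import os

def list_result_files(root: str = "./benchmark_root") -> list[str]:
    """Return sorted relative paths of all files under `root`."""
    paths = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            paths.append(os.path.relpath(os.path.join(dirpath, name), root))
    return sorted(paths)

for path in list_result_files():
    print(path)
```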
These parameters are available for both the `run-docker.sh` and `run-local.sh` scripts.
| Parameter | Required | Description | Default |
|---|---|---|---|
| `--model-url <SERVER_URL>` | Yes | Inference server base URL. | |
| `--model-name <MODEL_NAME>` | Yes | Model ID. | |
| `--model-api-key <API_KEY>` | Yes | Inference server API key. | |
| `--skip-install` | No | Skip dependency installation. Useful for re-running the script. | |
| `--num-samples <N>` | No | Number of samples to run. | 100 |
| `--ci` | No | Run in CI mode with as few tests as possible. | |
Run the following command to create a symbolic link to the system's `libstdc++.so.6` in the conda environment:

```bash
ln -sf /usr/lib/x86_64-linux-gnu/libstdc++.so.6 ${CONDA_PREFIX}/lib/libstdc++.so.6
```
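To confirm the link was created correctly, a small check can help. This is a sketch assuming the standard Linux x86_64 paths shown above and an activated conda environment (`CONDA_PREFIX` set):

```python
# Sketch: verify that a path is a symlink pointing at the expected target.
import os

def symlink_points_to(link_path: str, expected_target: str) -> bool:
    """True iff `link_path` is a symlink whose target is exactly `expected_target`."""
    return os.path.islink(link_path) and os.readlink(link_path) == expected_target

conda_prefix = os.environ.get("CONDA_PREFIX", "")
link = os.path.join(conda_prefix, "lib", "libstdc++.so.6")
target = "/usr/lib/x86_64-linux-gnu/libstdc++.so.6"
print(symlink_points_to(link, target))
```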
This repository is modified from [NVIDIA/RULER](https://github.com/NVIDIA/RULER).