Update README.md
benjamc authored Aug 18, 2024
1 parent 10cfa19 commit f21fdf3
Showing 1 changed file with 14 additions and 12 deletions.
# Welcome to CARP-S!
This repository contains a benchmarking framework for optimizers.
It allows flexibly combining optimizers and benchmarks via a simple interface, and logging experiment results
and trajectories to a database.
carps can launch experiment runs in parallel using [Hydra](https://hydra.cc), which offers launchers for Slurm (via submitit), Ray, RQ, and joblib.
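As a sketch, a seed sweep could be dispatched as Slurm jobs by selecting a Hydra launcher plugin on the command line (this assumes the `hydra-submitit-launcher` plugin is installed; the problem and optimizer values are placeholders):

```bash
# Sweep over ten seeds and dispatch each run as a Slurm job via submitit.
# Assumes: pip install hydra-submitit-launcher
python -m carps.run +problem=... +optimizer=... 'seed=range(0,10)' \
    hydra/launcher=submitit_slurm -m
```

Without the launcher override, Hydra's default basic launcher executes the sweep sequentially on the local machine.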

The main topics of this README are:
- [Installation](#installation)
You can run a certain problem and optimizer combination directly with Hydra via:
```bash
python -m carps.run +problem=... +optimizer=... seed=... -m
```
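For illustration, a filled-in invocation might look like the following (the config names below are hypothetical; substitute the actual problem and optimizer configs shipped with your carps installation):

```bash
# Hypothetical config names -- replace with real configs from the repository.
python -m carps.run +problem=bbob_function_1 +optimizer=random_search seed=1 -m
```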
To check whether any runs are missing, carps can generate a file `runcommands_missing.sh` containing the commands for the missing runs.

Experiments with error status (or any other status) can be reset via:
```bash
python -m carps.utils.database.reset_experiments
```
### Running with Containers and Database
Another option is to fill the database with all possible combinations of problems and optimizers
you would like to run:
```bash
python -m carps.container.create_cluster_configs +problem=... +optimizer=... -m
```
Then, run them from the database with:
```bash
python -m carps.run_from_db
```
## Adding a new Optimizer or Benchmark
For instructions on how to add a new optimizer or benchmark, please refer to the contributing
guidelines.
For each scenario (blackbox, multi-fidelity, multi-objective, and multi-fidelity-multi-objective):
Here we provide links to the [meta data](https://drive.google.com/file/d/17pn48ragmWsyRC39sInsh2fEPUHP3BRT/view?usp=sharing), which contains the detailed optimization settings for each run, and the [running results](https://drive.google.com/file/d/1yzJRbwRvdLbpZ9SdQN2Vk3yQSdDP_vck/view?usp=drive_link), which record the outcome of each optimizer-benchmark combination.
