Commit 2546214

Merge pull request #70 from anomatomato/main
Reworded the explanation of benchmarking strategies
2 parents: 22ce9e1 + f026887

3 files changed: 11 additions & 11 deletions


.pre-commit-config.yaml

Lines changed: 1 addition & 1 deletion
```diff
@@ -75,7 +75,7 @@ repos:
        exclude: .pre-commit-config.yaml
      - id: build-readme
        name: Build the README.md from the subfiles
-       entry: python ./build.py
+       entry: python3 ./build.py
        language: system
        always_run: true
        files: "README.md"
```

README.md

Lines changed: 5 additions & 5 deletions
```diff
@@ -10371,11 +10371,11 @@ impressions; such an approach does not scale and lacks objectivity and
 reproducibility. Instead, you should define a concrete metric that accurately
 reflects your goals. One strategy is to carefully select benchmark instances
 that are still likely to be solved to optimality, with the expectation that
-performance trends will generalize to larger instances. Another to decide for a
-fixed time limit we are willing to wait for a solution, and then measure how
-well each solver performs under these constraints. While no evaluation method
-will be perfect, it is essential to remain aware of potential threats to the
-validity of your results. Let us go through some common scenarios.
+performance trends will generalize to larger instances. Another strategy is to
+set a time limit for waiting for a solution, and then measure how well each
+solver performs under these constraints. While no evaluation method will be
+perfect, it is essential to remain aware of potential threats to the validity of
+your results. Let us go through some common scenarios.
 
 > [!NOTE]
 >
```

chapters/08_benchmarking.md

Lines changed: 5 additions & 5 deletions
```diff
@@ -157,11 +157,11 @@ impressions; such an approach does not scale and lacks objectivity and
 reproducibility. Instead, you should define a concrete metric that accurately
 reflects your goals. One strategy is to carefully select benchmark instances
 that are still likely to be solved to optimality, with the expectation that
-performance trends will generalize to larger instances. Another to decide for a
-fixed time limit we are willing to wait for a solution, and then measure how
-well each solver performs under these constraints. While no evaluation method
-will be perfect, it is essential to remain aware of potential threats to the
-validity of your results. Let us go through some common scenarios.
+performance trends will generalize to larger instances. Another strategy is to
+set a time limit for waiting for a solution, and then measure how well each
+solver performs under these constraints. While no evaluation method will be
+perfect, it is essential to remain aware of potential threats to the validity of
+your results. Let us go through some common scenarios.
 
 > :reference:
 >
```
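The reworded paragraph's second strategy (fix a time limit, then compare how well each solver does within it) can be sketched in Python. The solver interface below is hypothetical, purely for illustration: each solver is a callable taking an instance and a time limit and returning an objective value plus an optimality flag.

```python
import time

def benchmark(solvers, instances, time_limit=10.0):
    """Run each solver on each instance under a fixed time limit and
    record the objective value found, whether it was proven optimal,
    and the runtime (capped at the limit)."""
    results = {}
    for name, solve in solvers.items():
        for inst in instances:
            start = time.monotonic()
            objective, optimal = solve(inst, time_limit)
            elapsed = time.monotonic() - start
            results[(name, inst)] = {
                "objective": objective,
                "optimal": optimal,
                "runtime": min(elapsed, time_limit),
            }
    return results

# Toy solvers for illustration only: they return canned values
# instead of actually solving anything.
solvers = {
    "fast": lambda inst, tl: (len(inst), True),
    "slow": lambda inst, tl: (len(inst) + 1, False),
}
results = benchmark(solvers, ["abc", "abcd"], time_limit=1.0)
```

With a fixed budget per (solver, instance) pair, the resulting table stays comparable across solvers even when some runs fail to prove optimality.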
