Fixstars Amplify Benchmark is a framework for benchmarking the performance of solvers for quadratic unconstrained binary optimization (QUBO) problems. It provides a command line interface to perform benchmarking and definitions of benchmark problems.
The Fixstars Amplify SDK is used as the backend, allowing benchmarks to be run with many solvers such as quantum annealing machines, Ising machines, and mathematical optimization solvers. Benchmarks are run based on a job set file that defines the target problems, the number of runs, and the solvers to be used, making it easy to automate the process from benchmark execution to result analysis.
The results produced by this library can be loaded into the Amplify Benchmark Viewer to visualize them in a web browser. A demo of the benchmark results for Amplify AE is available here.
- Easy to run
- Parallel execution
- Automatic evaluation and analysis
- Benchmark result viewer is provided
- Customizable solver and problem parameters
- Formulations for several benchmark sets are pre-defined
- User-defined problems can be added
Pre-defined benchmark sets:
- Traveling Salesperson Problem: TSPLIB
- Quadratic Assignment Problem: QAPLIB
- Max-CUT Problem: Gset
- Capacitated Vehicle Routing Problem: CVRPLIB
- Quadratic Problem: QPLIB
- Sudoku (logic-based combinatorial number-placement puzzle)
Supported solvers powered by Amplify SDK:
- Fixstars Amplify AE
- D-Wave Advantage
- Fujitsu Digital Annealer 3/4
- Toshiba SQBM+
- Gurobi Optimizer
- (See supported solvers of Amplify SDK)
| Objective value for execution time | Time To Solution (TTS) |
|---|---|
| *(plot)* | *(plot)* |

| Probability of obtaining a feasible solution | Probability of obtaining the best solution |
|---|---|
| *(plot)* | *(plot)* |

| Table DATA |
|---|
| *(table screenshot)* |
Amplify Benchmark is provided as a Python (>=3.8) library. It can be installed with pip as follows:

```
$ pip install amplify-bench
```

After installation, the `amplify-bench` command is enabled.
```
$ amplify-bench --help
Usage: amplify-bench [OPTIONS] COMMAND [ARGS]...

Options:
  --help  Show this message and exit.

Commands:
  download  Download all supported instance files in the specified...
  run       QUBO Benchmark
  stats     Generate QUBO benchmark stats data.
```

To run a benchmark, you have to create a benchmark definition ("job set") file. The `example/benchmark.yml` file contains a sample job set file.
```yaml
jobs:
  - problem:
      class: Tsp
      instance: eil51
    client:
      - FixstarsClient
      - token: INPUT_API_TOKEN
        parameters:
          timeout: 3000
    num_samples: 2
```

The benchmark job set file contains a list of benchmark jobs in YAML or JSON format. Each job consists of the number of runs, the problem to solve, and the parameter values passed to the `Client` class. This job set defines the following benchmark job:
- target problem: TSPLIB `eil51` instance
- number of runs: 2
- client: `FixstarsClient`
  - `token`: INPUT_API_TOKEN
  - `parameters.timeout`: 3000
Now, to start the benchmark with this job set file, run the `amplify-bench` command with the `run` subcommand and the path to the job set file.
Note: Replace `INPUT_API_TOKEN` with your API token. If you do not have an API token, go to the Amplify website and create an account.
```
$ amplify-bench run benchmark.yml
input_json: benchmark.yml
label: 20230803_223440
output: None
parallel: 1
aws_profile: None
dry_run: False
cli_benchmark_impl() 20230803_223440
2023-08-03 22:34:41,308 [pid:94470] [INFO]: 542.49 ms in amplify_bench.cli.parser.parse_input_data
2023-08-03 22:34:41,309 [pid:94470] [INFO]: make model of eil51
2023-08-03 22:34:41,519 [pid:94470] [INFO]: 209.08 ms in amplify_bench.problem.tsp.make_tsp_model
total jobs: 2
success jobs: 2
error jobs: 0
Jobs not yet started: 0
```

When execution completes, a JSON file with the results is written to the same directory. By default, the file name is suffixed with the execution time. The output file path can be changed with the `--output <path>` option.
The results can be visualized using the Amplify Benchmark Viewer. To analyze for the viewer, give the stats subcommand with the path to a directory or a result file to the amplify-bench command.
```
$ amplify-bench stats preset_20230803_223440.json
```

By default, a `report/data.json` file is created in the current directory. The output directory can be changed with the `--output` option.
Then, drag and drop the created report/data.json file into the Amplify Benchmark Viewer GitHub pages to visualize the results.
Note The data is analyzed only on the browser and is not stored on the external server.
| Drag and drop the data.json file | The evaluated problem list | Detailed evaluation for each problem |
|---|---|---|
| *(screenshot)* | *(screenshot)* | *(screenshot)* |
A job set file consists of a JSON object with the following keys. The schema of the job set file is described in `amplify_bench/cli/schemas`.
| key | type | description |
|---|---|---|
| `jobs` | array[JobObject] | list of benchmark jobs |
| `variables` | object | definitions of variables used in jobs (optional) |
| `imports` | array[string] | user-defined problem file paths (optional) |
Strings in the file that begin with `$` are treated as variable names. Variables are first expanded from environment variables at runtime; then the definitions given under the `variables` key are referenced in `jobs`. This is useful, for example, to specify a setting that is shared by multiple jobs.
```yaml
variables:
  CLIENT:
    - FixstarsClient
    - parameters:
        timeout: 3000

jobs:
  - problem:
      class: Tsp
      instance: eil51
    client: $CLIENT
    num_samples: 2
  - problem:
      class: Tsp
      instance: burma14
    client: $CLIENT
    num_samples: 1
```

`imports` specifies the list of paths to user-defined problem files. Enter each path as a relative path from the job set file, a relative path from the current directory, or an absolute path. For details on user-defined problem files, see "Create your own benchmark problems".
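The expansion order described above (environment variables first, then the `variables` key, applied recursively) can be sketched as follows. This is an illustrative helper, not the actual Amplify Benchmark implementation; the function name `expand` is an assumption made here for clarity:

```python
import os
from typing import Any

def expand(value: Any, variables: dict) -> Any:
    """Recursively expand $NAME references: environment variables first,
    then the job set's `variables` definitions."""
    if isinstance(value, str) and value.startswith("$"):
        name = value[1:]
        if name in os.environ:
            return os.environ[name]
        # fall back to the `variables` key, expanding its value recursively
        return expand(variables[name], variables)
    if isinstance(value, list):
        return [expand(v, variables) for v in value]
    if isinstance(value, dict):
        return {k: expand(v, variables) for k, v in value.items()}
    return value

variables = {"CLIENT": ["FixstarsClient", {"parameters": {"timeout": 3000}}]}
job = {"problem": {"class": "Tsp", "instance": "eil51"}, "client": "$CLIENT"}
expanded = expand(job, variables)
```

After expansion, `expanded["client"]` holds the full client definition from `variables`, which is how one `CLIENT` definition can be shared by multiple jobs.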
| key | type | description |
|---|---|---|
| `num_samples` | int | the number of runs |
| `client` | array | client name and parameters |
| `problem` | ProblemObject | the problem definition |
| `matrix` | object[array] | definitions of variable patterns (optional) |
If `num_samples` is an integer greater than 1, the job is run that many times with the same settings. The `matrix` key takes patterns of variables, explained later.
The client key is given an array of length 2. The first element of the array is the name of the client class, and the second element is an object with the property values of the client. For example, the following client configuration in the Amplify SDK
```python
from amplify.client import FixstarsClient

client = FixstarsClient()
client.token = "INPUT_API_TOKEN"
client.parameters.timeout = 1000
```

should be specified as follows in the job set file.
```yaml
jobs:
  - client:
      - FixstarsClient
      - token: INPUT_API_TOKEN
        parameters:
          timeout: 1000
```

Note: See the documentation for the available properties of each `Client` class.
The problem key has an object consisting of the following keys:
| key | type | description |
|---|---|---|
| `class` | string | the name of the problem class |
| `instance` | string | the instance name |
| `parameters` | object | constructor parameters of the problem class |
The `class` is the name of a problem class contained in `amplify_bench/problem`. The predefined problem classes are `Tsp` (TSPLIB), `Qap` (QAPLIB), `Cvrp` (CVRPLIB), `MaxCut` (GSET), `Sudoku`, and `Qplib` (QPLIB). The `instance` is the name of an instance in the problem set corresponding to each problem class; see `amplify_bench/problem/data` for details. Problem classes may have formulation parameters that can be passed to the constructor, which can be specified with the `parameters` key.
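For illustration, a `ProblemObject` with constructor parameters could look like the following. Note that `constraint_weight` is used here as a hypothetical formulation parameter; check the constructor of each problem class for the parameters it actually accepts:

```yaml
jobs:
  - problem:
      class: Tsp
      instance: eil51
      parameters:
        constraint_weight: 2.0  # hypothetical constructor argument
```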
Multiple jobs can be automatically generated for all combinations of multiple variable patterns given in a single job definition. For example, to run for all combinations of multiple instances and multiple runtimes, the following job set file can be used.
```yaml
variables:
  NUM_SAMPLES: 100
  FIXSTARS:
    - FixstarsClient
    - token: INPUT_API_TOKEN
      parameters:
        timeout: $TIMEOUT

jobs:
  - problem:
      class: Qap
      instance: $INSTANCE
    client: $FIXSTARS
    num_samples: $NUM_SAMPLES
    matrix:
      INSTANCE:
        - esc32a
        - sko56
      TIMEOUT:
        - 10000
        - 30000
```

The `matrix` key maps variable names to arrays of values. In this case, jobs with timeouts of 10000 and 30000 are created for each of `esc32a` and `sko56`. Note that variables passed to `matrix` can be referenced from the definitions in `variables`.
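The job generation from `matrix` is a Cartesian product over the listed variables. A minimal sketch (an illustration of the expansion, not the actual Amplify Benchmark implementation):

```python
from itertools import product

def expand_matrix(matrix: dict) -> list:
    """Generate one variable assignment per combination of matrix values."""
    keys = list(matrix)
    return [dict(zip(keys, combo)) for combo in product(*(matrix[k] for k in keys))]

matrix = {"INSTANCE": ["esc32a", "sko56"], "TIMEOUT": [10000, 30000]}
jobs = expand_matrix(matrix)
# yields 4 assignments: one per (INSTANCE, TIMEOUT) combination
```

Each resulting assignment is then substituted into the job definition, producing four concrete jobs from the single definition above.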
Note: Variables are expanded recursively, but a circular reference will cause the expansion to fail.
User-defined formulations can be added as benchmark problems that are recognized in Amplify Benchmark.
The following example runs a benchmark against the `MyTsp` class defined in the `mytsp.py` file. Setting a list of Python file paths in the `imports` key loads the additional problem classes defined in those files. A file path must be specified as an absolute path, or as a path relative to the job set file or the current directory.
```yaml
imports:
  - mytsp.py

jobs:
  - problem:
      class: MyTsp
      instance: random8
```

Note: User-defined class names must not duplicate the built-in problem classes.
The problem class must extend the Problem class and implement the constructor (__init__) and the methods make_model and evaluate. The make_model method formulates the problem in Amplify SDK and the evaluate method evaluates the formulated model with the solution as input.
The following code snippet is an example of a MyTsp problem class.
`mytsp.py`:

```python
class MyTsp(Problem):
    def __init__(
        self,
        instance: str,
        constraint_weight: float = 1.0,
        seed: int = 0,
        path: Optional[str] = None,
    ):
        super().__init__()
        self._instance: str = instance
        self._problem_parameters["constraint_weight"] = constraint_weight
        if instance.startswith("random"):
            self._problem_parameters["seed"] = seed
        self._symbols = None

        ncity, distances, locations, best_known = self.__load(self._instance, seed, path)
        self._ncity = ncity
        self._distances = distances
        self._locations = locations  # not used
        self._best_known = best_known

    def make_model(self):
        symbols, model = make_tsp_model(self._ncity, self._distances, self._problem_parameters["constraint_weight"])
        self._symbols = symbols
        self._model = model

    def evaluate(self, solution: SolverSolution) -> Dict[str, Union[None, float, str]]:
        value: Optional[float] = None
        path: str = ""

        if solution.is_feasible:
            spins = solution.values
            variables = np.array(self._symbols.decode(spins))  # type: ignore
            index = np.where(variables == 1)[1]
            index_str = [str(idx) for idx in index]
            value = calc_tour_dist(list(index), self._distances)
            path = " ".join(index_str)
        else:
            pass

        return {"label": "total distances", "value": value, "path": path}

    ...
```

`def __init__(self, instance: str, **kwargs) -> None`

The constructor must accept at least an `instance: str` argument. If you additionally add `constraint_weight: float` to the constructor arguments, for example, the `parameters` key in `ProblemObject` can have a `constraint_weight`.
`def make_model(self) -> None`

The `make_model` method is responsible for formulating the problem with the Amplify SDK and storing an instance of the `amplify.BinaryQuadraticModel` class in `self._model`.
`def evaluate(self, solution: amplify.SolverSolution) -> Dict[str, Union[None, float, str]]`

The `evaluate` method takes an `amplify.SolverSolution` and evaluates it. The return value can contain any keys and values as `Dict[str, Union[None, float, str]]`, and is output under the `objective_value` key in the JSON file of the benchmark result.
Amplify Benchmark is an open source project. Contributions are welcome, including bug reports, feature additions, documentation improvements, etc.
The Amplify Benchmark project ties together:

*(project logos)*