
Commit f808507

Add directory structure samples
1 parent 91a71a0 commit f808507

File tree: 328 files changed, +2545944 −0 lines changed

# Sample README

Use this folder to include your submission code
# Sample README

Use this folder to include your submission documentation
# Sample README

Use this file to include your submission calibration
# MLPerf Inference 5.1

## Setup

### Model and Dataset

Build the docker image for the benchmark by running the command below:

```bash
bash setup/build_model_and_dataset_env.sh
```

Start the docker container for the benchmark by running the command below:

```bash
bash setup/start_model_and_dataset_env.sh
```

Inside the docker container, download the model with:

```bash
# Generate an access token on huggingface and set it here
HUGGINGFACE_ACCESS_TOKEN="<your HF token goes here>" python download_model.py
```
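The download step reads the access token from the environment variable set on the command line above. A minimal sketch of how a script might consume that variable (the internals of `download_model.py` are not shown in this README, so the helper below is illustrative only):

```python
import os

def get_hf_token() -> str:
    """Read the HuggingFace access token from the environment (hypothetical helper;
    the real download_model.py may handle this differently)."""
    token = os.environ.get("HUGGINGFACE_ACCESS_TOKEN", "")
    if not token:
        raise RuntimeError(
            "Set HUGGINGFACE_ACCESS_TOKEN before running download_model.py"
        )
    return token
```

Failing fast with a clear message when the token is missing avoids a confusing authentication error later in the download.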
Inside the docker container, download the dataset with:

```bash
bash download_llama2_70b.sh
```

Inside the docker container, quantize the model with:

```bash
bash quantize_llama2_70b.sh
```
Exit the docker container, because a different image is needed for inference.

## Inference

### Runtime tunables

To boost the machine's performance further, execute the following script before any performance test (it only needs to be run once after each reboot):

```bash
bash setup/runtime_tunables.sh
```

### Docker

Set the image name for the benchmark:

```bash
export MLPERF_IMAGE_NAME=rocm/mlperf-inference:submission_5.1-llama2_70b
```
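The exported image name follows the usual docker `repository:tag` form; a tiny helper that splits it (illustrative only, not part of the setup scripts):

```python
def split_image_name(name: str) -> tuple[str, str]:
    """Split a docker image reference into repository and tag (hypothetical helper)."""
    repo, _, tag = name.rpartition(":")
    return repo, tag

repo, tag = split_image_name("rocm/mlperf-inference:submission_5.1-llama2_70b")
# repo == "rocm/mlperf-inference", tag == "submission_5.1-llama2_70b"
```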
Build the docker image for the benchmark by running the command below:

```bash
bash setup/build_submission_llama2_70b.ssh $MLPERF_IMAGE_NAME
```

Start the docker container for the benchmark by running the command below:

```bash
bash setup/start_submission_env.sh $MLPERF_IMAGE_NAME
```

### Running the benchmark

Run the following commands inside the docker container:
```bash
## Performance
python /lab-mlperf-inference/code/llama2-70b-99.9/main.py \
    --config-path /lab-mlperf-inference/code/llama2-70b-99.9/harness_llm/models/llama2-70b/ \
    --config-name interactive_mi300x \
    test_mode=performance \
    harness_config.device_count=8 \
    harness_config.user_conf_path=/lab-mlperf-inference/code/llama2-70b-99.9/user_mi300x.conf \
    harness_config.output_log_dir=/lab-mlperf-inference/results/llama2-70b/Interactive/performance/run_1

## Accuracy
python /lab-mlperf-inference/code/llama2-70b-99.9/main.py \
    --config-path /lab-mlperf-inference/code/llama2-70b-99.9/harness_llm/models/llama2-70b/ \
    --config-name interactive_mi300x \
    test_mode=accuracy \
    harness_config.device_count=8 \
    harness_config.user_conf_path=/lab-mlperf-inference/code/llama2-70b-99.9/user_mi300x.conf \
    harness_config.output_log_dir=/lab-mlperf-inference/results/llama2-70b/Interactive/accuracy

## Evaluate accuracy
bash /lab-mlperf-inference/code/llama2-70b-99.9/scripts/check_llama2_accuracy_scores.sh \
    /lab-mlperf-inference/results/llama2-70b/Interactive/accuracy/mlperf_log_accuracy.json
```
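The bare `key=value` arguments passed to `main.py` above (alongside `--config-path`/`--config-name`) look like Hydra-style overrides applied to a nested config. A standalone sketch of that mapping, purely illustrative; the real harness config handling may differ:

```python
# Apply dotted key=value overrides to a nested dict (sketch of Hydra-style
# override semantics; values are kept as strings for simplicity).
def apply_overrides(config: dict, overrides: list[str]) -> dict:
    for item in overrides:
        key, _, value = item.partition("=")
        node = config
        parts = key.split(".")
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return config

cfg = apply_overrides(
    {}, ["test_mode=performance", "harness_config.device_count=8"]
)
# cfg == {"test_mode": "performance", "harness_config": {"device_count": "8"}}
```

Each dotted prefix selects (or creates) a nested section, so `harness_config.device_count=8` lands under the `harness_config` block of the named config.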

tools/submission/directory_structure_samples/sample_1/closed/AMD/results/8xMI300X_2xEPYC_9575F/llama2-70b-99.9/Interactive/TEST06/accuracy/mlperf_log_accuracy.json
(102 additions; large diff not rendered by default)
First token check pass: True
EOS check pass: True
Sample length check pass: True
TEST06 verification complete
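The TEST06 summary above can also be checked mechanically. A small sketch that scans the printed summary for failed checks (the line format is taken from the output shown; the helper itself is illustrative):

```python
def all_checks_pass(summary: str) -> bool:
    """Return True if every '... check pass:' line in the summary reports True."""
    checks = [line for line in summary.splitlines() if "check pass:" in line]
    return bool(checks) and all(line.strip().endswith("True") for line in checks)

summary = """First token check pass: True
EOS check pass: True
Sample length check pass: True
TEST06 verification complete"""
# all_checks_pass(summary) is True for the output above
```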
Results

{'rouge1': 44.4542, 'rouge2': 22.0419, 'rougeL': 28.6112, 'rougeLsum': 42.0341, 'gen_len': 29579723, 'gen_num': 24576, 'gen_tok_len': 7399913, 'tokens_per_sample': 301.1}

hash=8f6a1812b3b71c9e53e99a9f7d73df2a2179ca2aac62ef1ddda8400d744cfac6
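As a sanity check on the summary above, the reported `tokens_per_sample` is consistent with the total generated token count divided by the number of samples:

```python
# Values taken from the accuracy results summary above.
gen_tok_len = 7399913   # total generated tokens
gen_num = 24576         # number of samples
tokens_per_sample = round(gen_tok_len / gen_num, 1)
# tokens_per_sample == 301.1, matching the reported value
```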
